
Overnight, the global AI community was sharing this farewell tweet.

I'm stepping down. Goodbye, my beloved qwen.

In the early hours of March 4, Lin Junyang, the technical lead of Alibaba's Qwen, suddenly posted an article on X, bidding farewell to the open-source model project he had nurtured.

This tweet instantly ignited the entire global AI open-source community. Just the day before, he and his team had released the Qwen3.5 small-size model series, which received a personal thumbs-up from Musk, and Lin Junyang politely thanked him on X.

Unexpectedly, this turned out to be Lin Junyang's last appearance as part of Qwen.

Several key members of Qwen resigned at the same time. A colleague left a message saying: "I'm really heartbroken."

Lin Junyang did not reveal the reason for his departure or his future plans. After his tweet was posted, Chen Cheng (@cherry_cc12), also a member of Qwen, retweeted it and left a meaningful comment:

I'm truly heartbroken. I know leaving wasn't your choice. Just last night, we were releasing the Qwen 3.5 miniature model side-by-side. Honestly, I can't imagine Qwen without you.

The message quickly fueled speculation: "leaving wasn't your choice" suggests that Lin Junyang's departure may not have been his own decision.

At the same time, more core members of the Qwen team announced their departure:

Kaixin Li (@kxli_2000) is a graduate of the National University of Singapore and a core contributor to Qwen3.5, Qwen-VL (visual language model), and Qwen-Coder.

He posted a farewell message on X: "Signing off from @Alibaba_Qwen. Grateful for the chance to work with such brilliant minds. Proud of our impact. Onwards and upwards!"

Binyuan Hui (@huybery), a senior researcher at Alibaba, is the initiator of the OpenDevin open-source project and the main technical lead for the Qwen-Coder series of models. His profile has been changed to "former MTS at Qwen".

He has extensive experience in code generation, natural language to SQL conversion, and other fields. He led the development of the Qwen Chat web interface, making the Qwen model easier to use.

Wenting Zhao, a research scientist on the Qwen team, described Lin Junyang's departure as "the end of an era" on X, thanking him for driving Qwen's progress in open-source AI and engineering.

Overnight, Alibaba's core open-source big model team experienced a personnel earthquake, and Lin Junyang's departure has also attracted attention from the global AI community.

Yuchen Jin, CTO of Hyperbolic Labs, recalled working late into the night with the Qwen team on the model launch, saying that Junyang Lin helped Qwen establish close ties with the global developer community.

Tiezhen Wang, Head of Asia Pacific Ecosystem at Hugging Face, described Lin Junyang's departure as "an immense loss" for Qwen.

From a Master of Linguistics from Peking University to Alibaba's youngest P10

Lin Junyang's resume is a typical example of China's new generation of AI technology talents.

Born in 1993, he studied computer science at Peking University as an undergraduate, but chose linguistics and applied linguistics at the School of Foreign Languages for his master's degree. This cross-disciplinary background laid the groundwork for his later breakthroughs in multimodal large models.

After graduating with a master's degree in 2019, Lin Junyang joined the Alibaba DAMO Academy Intelligent Computing Lab as a fresh graduate, becoming a member of the M6 multimodal pre-trained model team.

In 2022, he led the development of the general unified multimodal pre-trained model OFA and the Chinese pre-trained model Chinese CLIP. In the same year, he was appointed as the technical lead of Tongyi Qianwen.

In 2025, Lin Junyang, at the age of 32, was promoted to become the youngest P10-level technical expert in Alibaba's history.

Under Lin Junyang's leadership, the Qwen series of models has achieved remarkable results that have attracted industry attention.

  • In August 2023, Qwen was open-sourced for the first time.
  • In 2024, the open-source Qwen2 series, specifically the 72B model, topped the LMSYS Chatbot Arena open-source leaderboard.
  • In 2025, the flagship Qwen3-Max, with trillions of parameters, was launched, ranking among the top three globally.
  • In March 2026, the Qwen 3.5 miniature model received praise from Elon Musk.

To date, the Qwen series of models has been downloaded over 600 million times globally, with more than 170,000 derivative models, surpassing Meta's Llama to become the world's largest open-source model family. This marks a significant expansion of the global influence of Chinese open-source AI models.

Models are products

Lin Junyang is not only a technical expert, but also Qwen's "spokesperson" in the global developer community.

On X, he regularly releases model updates, shares benchmark results, and interacts with developers worldwide—in today's AI labs vying for developers' attention, this active public image gives Qwen a rare "human touch" on the international stage.

At the AGI-Next Frontier Summit in January this year, he put forward a rather forward-looking viewpoint:

"Models are products. Today, building basic models is essentially building products. Researchers also need to act like product managers, turning their research findings into real-world usable systems."

In October 2025, he also announced that he would personally build a robotics and embodied intelligence team within Qwen in an attempt to bring models "from the virtual world to the real world".

Qianwen has reached a new crossroads.

Lin Junyang's departure is just the tip of the iceberg of talent loss at Alibaba's Tongyi Lab.

Over the past two years, Tongyi Lab has experienced multiple departures of key personnel:

  • Zhou Chang (former head of Tongyi Qianwen Large Model Technology): He was poached by ByteDance in 2024 with an annual salary of tens of millions of yuan, and Alibaba subsequently filed a non-compete lawsuit against him.
  • Yan Zhijie (former head of the voice team): One of the "hidden masters" of the DAMO Academy, he left the company in 2025.
  • Bo Liefeng (former head of multimodal and vision technologies): left the company in 2025.

It's no wonder that some people jokingly say that Alibaba has gradually become a "Whampoa Military Academy" for cultivating high-end talent in the field of AI.

Just two days ago, Alibaba announced that it would unify its large-scale B-end brand and C-end application brand into "Qianwen," and the name "Tongyi Qianwen" would no longer be used.

Qianwen also just won a victory in the recent Spring Festival AI battle.

According to the latest global AI application data released by the AI Product Ranking, the top three AI applications in terms of MAU (monthly active users) are ChatGPT, Doubao, and Qianwen. Among them, Qianwen has become the world's third largest AI application with 203 million MAU and ranks first in the world with a growth rate of 552%.

This Spring Festival, Qianwen launched a "treating guests" campaign, offering errand-style functions such as buying milk tea, ordering takeout, and booking tickets. With a single sentence, 130 million users placed orders through Qianwen, more than 200 million orders in total, which means that on average roughly one in ten people in China placed an order on Qianwen.

According to QuestMobile data, the event attracted over 30 million users in its first two days, boosting Qianwen's daily active users (DAU) from 7.07 million to 73.52 million, a growth rate of 940%. After the Spring Festival, the gap between Qianwen and Doubao's DAU narrowed significantly, stabilizing at around 40 million.

For Alibaba, maintaining Qwen's technological leadership and open-source influence amid the dual pressures of talent loss and organizational restructuring will be a severe test.

Alibaba's Qianwen is standing at a critical crossroads.



Are these viral “battlefield live” videos all AI-generated? Here are 5 tips to avoid being fooled.

"I miss the days when images on the internet were always accurate… wait, it seems like there was never such a time."

News of the recent conflict in Iran has been flooding various news feeds with images of explosions, air raid sirens, and other highly impactful scenes. However, a large portion of the "battlefield reports" that have garnered countless likes and shares are actually fake.

▲These videos all garnered over a million views, but were ultimately confirmed to be AI-generated.

Several verified self-media accounts on X published fake AI-generated videos; the notes attached to them, however, pointed out very obvious signs of AI in the footage, such as the smoke effects, the distorted water surface, and the solar panels on the roof.

Some of these videos are from unrelated old conflicts from nine years ago, while others are synthetic illusions manipulated by AI. Most absurdly, Texas Governor Greg Abbott also retweeted a video game video before quickly deleting it.

▲A simulated video game scene; this video post has been viewed over 7 million times | Video source: X@realJoelFischer

This so-called "first-hand conflict video," which has been widely cited on overseas social media, was actually taken directly from a military-themed video game.

It's quite remarkable that people now treat not only AI output but even game footage as news. In 2026, with AI's rapid advances in image and video generation, the old internet rule that "a picture proves it happened" has become a complete joke.

These posts, shared millions of times, have all been confirmed to be crude AI fabrications.

Besides the proliferation of videos, another thing that has attracted attention is a satellite image that's gone viral on X. After all, who would spend hundreds of millions to launch a satellite just to Photoshop an image online and fool me?

The image shows a U.S. military radar system in Qatar reduced to rubble after being attacked by an Iranian drone. Even the official account of Iran's mainstream media outlet, the Tehran Times, eagerly shared this "battle result" photo.

▲Image source: X@TehranTimes79

Within just 48 hours, the post garnered over 1 million views. However, open-source intelligence experts quickly exposed the flaws in the image.

After comparison, it was found that this was not a radar base in Qatar at all, but an area in Bahrain. Even more absurdly, the image was forcibly "created" using AI from an old photo from a year ago.

How was it found out? Some netizens noticed on closer inspection that the picture is quite crudely made. Although the building looks destroyed, the parked cars around it sit in exactly the same positions as a year earlier. Even more outrageous, the angle of light and shadow in the supposed "after the explosion" image is identical to that in the sunny-day photo taken a year ago.

Defeating AI magic still comes down to five simple steps.

Although most AI-generated content is currently required to have a visible or digital watermark, this system is still easy to circumvent.

Take the images generated by Nano Banana as an example. The official documentation states that a visible Gemini logo watermark and an invisible SynthID digital watermark are added. However, after the multiple rounds of screenshotting, cropping, and compression that happen on social media, Gemini can barely recognize the watermarks it previously embedded.

▲Methods to bypass the SynthID watermark are already available on Reddit.

1. Pay attention to details and spot anything that seems off.

Some people ask, since the flaws in those AI videos and images were eventually discovered to be so obvious, why didn't everyone notice them at the beginning?

The reason is actually quite simple: when we look at an AI-generated face, our brains instinctively look for inconsistencies—the shape of the eyes, skin texture, and ears. This is a biological instinct that we have evolved over millions of years.

However, this instinct fails when looking down at a building, road, and terrain photographed from hundreds of kilometers above. Because no one is born knowing what a destroyed radar station "should" look like under a sensor at a specific resolution.

With little information available for reference, these unfamiliar contents fabricated by AI naturally become objective facts in the eyes of ordinary people.

Now that algorithms can convincingly simulate light, shadow, and skin texture, the logic of flaw-hunting has changed: rather than relying on a frame of reference and hunting for technical glitches, the focus has shifted to spotting breaks in real-world logic.

For example, inappropriate architectural styles in the background, or minor, illogical actions of the characters.

▲Unverified photo

After Maduro's arrest some time ago, several "captive photos" of him circulated wildly on social media. Foreign media visual investigation teams quickly discovered that these pictures were suspicious: the design of the airplane window did not match the actual aircraft model, and Maduro's clothes were different in the two photos.

Although there is no direct evidence to prove that they are fake, these doubts led the media to decide not to publish the photos.

2. Who sends the message is more important than the message itself.

The identity of the person who posted the picture often says more than the content itself.

An alleged photo of Khamenei's assassination garnered 5.5 million views on social media, yet the poster's own website describes itself in its "About" section as follows: "SilverTrade.com is committed to providing the most accurate, insightful, and timely reporting on the precious metals industry."

Even with Maduro's photo being posted on Truth Social, several news outlets still have doubts about its authenticity.

Ultimately, most media outlets chose to quote the entire post as a screenshot rather than presenting the photo alone, an approach that conveyed a sense of "distrust, but news value."

3. Track digital footprints; historical records don't lie.

The most common method used by AI to create fake news is to "repurpose" old material. By using reverse image searches on search engines like Google and TinEye, or even by checking image metadata (such as shooting time and device model), it is possible to quickly determine whether the content is fake.

▲https://tineye.com/
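
For the metadata part of this check, a rough automation is possible. Below is a minimal sketch (assuming the Pillow package and a local file named suspect.jpg, both illustrative) that dumps whatever EXIF fields survive, such as capture time and camera model; keep in mind that many platforms strip this data on upload, so an empty result proves nothing by itself.

```python
# Minimal sketch: dump EXIF metadata from a suspect image.
# Assumes the Pillow package is installed; "suspect.jpg" is an illustrative filename.
from PIL import Image, ExifTags

img = Image.open("suspect.jpg")
exif = img.getexif()

if not exif:
    print("No EXIF metadata found (social platforms often strip it on upload).")
else:
    for tag_id, value in exif.items():
        # Translate numeric tag IDs into readable names such as DateTime or Model
        tag_name = ExifTags.TAGS.get(tag_id, tag_id)
        print(f"{tag_name}: {value}")
```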

For example, this classic manipulated image easily fooled a number of media outlets simply through elements copied and moved around within a pre-existing photograph.

4. Verify key background information based on time and location.

If we see a video that claims to have been filmed in a certain location, we can check whether the footage matches that location using Google Maps or satellite imagery.

▲Google Earth provides complete historical images and street views.

You can also use SunCalc to estimate the approximate time of the shot by observing the direction of shadows in the image. If someone claims to have taken the photo last night, but the shadows indicate it was taken at noon, it's almost certainly a fake.

▲ In the photography community, SunCalc is also a geographical website that accurately calculates the positions of the sun and moon to find the golden hour for photography.
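
As a rough sketch of the same idea in code (assuming the third-party astral package; the location and timestamp below are illustrative placeholders), you can compute where the sun should have been for a claimed time and place, then compare that against the shadows in the footage:

```python
# Minimal SunCalc-style check (assumes `pip install astral`).
# The location and timestamp are illustrative placeholders, not from the article.
from datetime import datetime, timezone

from astral import LocationInfo
from astral.sun import azimuth, elevation

# Claimed shooting location and time of the footage being checked
place = LocationInfo(name="Doha", region="Qatar", timezone="Asia/Qatar",
                     latitude=25.29, longitude=51.53)
claimed_time = datetime(2026, 3, 1, 12, 0, tzinfo=timezone.utc)

sun_azimuth = azimuth(place.observer, claimed_time)      # compass bearing of the sun
sun_elevation = elevation(place.observer, claimed_time)  # angle above the horizon

print(f"Sun azimuth:   {sun_azimuth:.1f} deg")
print(f"Sun elevation: {sun_elevation:.1f} deg")
# A negative elevation means the sun was below the horizon, so a bright daylight
# photo claimed for this time and place cannot be genuine.
```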

5. Utilize in-depth research to enable AI to fight AI.

Almost all mainstream AI tools now offer their own in-depth research feature. For example, when we summarized the Spring Festival AI battle, we had ChatGPT's in-depth research function run for half an hour to compile the information for us.

The advantage of in-depth research is that every sentence generated by AI comes with a source link, allowing you to directly see where the information comes from and what its nature is. If we require high data accuracy, we can also add the following prompt: "For each conclusion, provide a credibility assessment."

However, one thing to note is that in-depth research may be reliable, but general Q&A is not.

Asking an AI directly, "Is this news true?" sometimes results in it conflating casual speculations posted on social media with official reports, giving us a seemingly plausible but incorrect answer. In-depth research, at least, allows you to access the original information source and make your own judgment.

▲Can you tell which of these two pictures is real?

For example, when we directly feed these two images to AI and ask, "Was this image generated by AI?"

Gemini said that both images are very likely based on the same original image, and were generated through post-processing or AI color replacement. ChatGPT and Doubao told me that the red image is more likely to be AI-generated.

There are now many specialized image tampering detection tools available. A few days ago, some netizens specifically tested more than ten AI content detection tools on the market (including hivedetect.ai, aioornot.com, copyleaks.com, and some general AI tools), and the results of over 1000 tests showed…

Magic cannot defeat magic; using AI to detect AI is a doomed fantasy.

▲Image source: NYT article (These Tools Say They Can Spot AI Fakes. Do They Really Work?)

AI detection tools can serve as a reference; they can give us a direction, but they cannot make direct judgments.

When Adobe celebrated the 25th anniversary of Photoshop, they launched a website for testing the authenticity of images. Those who are interested can check it out. Back then, with only Photoshop, it was already possible to make some images difficult to distinguish, let alone with today's powerful AI.

▲ How to distinguish between photoshopped and real images: https://landing.adobe.com/en/na/products/creative-cloud/69308-real-or-photoshop/index.html

"Let the bullets fly for a while."

In response to the recent proliferation of AI-generated fake images and fake news, social media platforms have begun to take action.

Starting today, creators on the X platform who upload AI-generated videos without labeling them "AI-made" will have their "Creator Revenue Sharing Program" suspended for 90 days. If they violate this rule again, they will be permanently barred from earning advertising revenue from the platform.

X's platform revenue sharing has always been considerable, and many AI-driven self-media outlets update simultaneously on X. At the beginning of the year, X also updated its content incentive program, allocating revenue based on the number of times content appears on the homepage, while also encouraging the creation of long-form articles.

▲Nikita Bier, X's product lead, posted that he would be modifying the creator revenue-sharing mechanism.

The policy announcement sparked outrage among creators and users on X. Some supported it, saying, "Finally, something's being done!" But others questioned, "Why only target conflict videos? Doesn't fake content in other areas cause various harms?"

I suspect that even if these measures cover fake news across various sectors, the actual effectiveness will likely be less than optimistic. After all, users can easily repost using other accounts, and the platform's content moderation is far from keeping up with the speed at which fake images spread.

In an article by The Verge interviewing a fake news expert, it was mentioned that "ordinary people must be aware that the current digital environment is inherently biased towards manipulation and deception."

The bigger problem now seems to be that we are still not vigilant enough about AI-generated fabrications. But as ordinary people, it would be too much trouble to fact-check every single news story.

Being patient might be the simpler approach. The line from Jiang Wen's film, "Let the bullets fly for a while," may be the most sober, if counterintuitive, stance we can take under the manipulation of algorithms.



The Wenjie and Zunjie flagships both see price increases! A first-ever pixel-level LiDAR debuts, and the Shangjie Z7's interior has also been revealed.

In less than two years since its launch, the Wenjie M9 has delivered more than 280,000 units. Its sales have surpassed those of the four best-selling BBA (BMW, Mercedes-Benz, Audi) models, effectively outselling four rivals at once.

At the beginning of the press conference, Yu Chengdong spoke very proudly about the market performance of the flagship model M9.

The other flagship model, the S800, also achieved equally strong sales, surpassing the combined sales of several traditional luxury giants in the same period in the high-end market. To paraphrase Yu Chengdong's words on stage, the S800 also achieved "one against three".

However, time is fair to all automotive products. The Wenjie M9 has been on the market for quite some time now, and with the rapid advancement of the industry as a whole, its once dazzling specifications are being quickly leveled off by its competitors.

As refrigerators, color TVs, comfortable sofas, and even advanced driver assistance systems become more common and standard features in competing products at the same price point, the early competitive advantage will inevitably be diluted. When the differences in surface-level features become smaller, the real gap can only be widened again at the system level.

The HarmonyOS Intelligent Mobility Technology Upgrade Launch Conference this afternoon is a perfect response to this competitive pressure.

Alongside the 2026 Wenjie M9 and Zunjie S800, a new generation of 896-line dual-optical-path pixel-level LiDAR and comprehensively upgraded active safety technology were also launched.

Prioritizing the use of the latest and most expensive hardware in flagship models is a common business practice. HarmonyOS's underlying updates today aim to inject new experiential value into these two core products and establish their technological boundaries for 2026.

The progression from "interaction" to "safety"

At the beginning of today's press conference, Huawei spent a considerable amount of time showcasing the various features of the S800 and M9, including but not limited to:

Features include an intelligent tracking system where the cockpit lights follow your movements, the TuLing Dragon platform that reduces the S800's turning radius to 5.05 meters, and the Huawei Qiankun ADS 4.1 system that supports three-point U-turns.

Some readers may find these features familiar, especially S800 and M9 owners. In fact, many of the features mentioned at the launch event are already present in the current models. The real incremental information from this technology upgrade launch event is actually the following.

Let's look at two practical interactive updates. First, you can now control your car directly with your Huawei smartwatch; a simple pinch of two fingers can open or close the trunk, among other operations. Second, there's an update to in-car gesture control: sitting in the driver's seat, simply wave your hand towards the passenger door, and it will close automatically, saving you the trouble of reaching for the door. The same applies to the rear seats.

The system's capabilities have also been significantly improved in handling extreme situations at high speeds.

The maximum intervention speed for tire blowout stability control has been increased from 120 km/h to 130 km/h. After stabilizing the vehicle's attitude, the system no longer brakes directly to a stop in the current lane as before; instead, it automatically seeks a safe opportunity to pull over, significantly reducing the risk of secondary accidents.

The dynamic side wing support inside the car has also been optimized, with two new sensitivity adjustments added, providing a more tailored support level based on individual driving habits and road conditions. The voice assistant, Xiaoyi, has also learned various regional dialects, and it is expected that in a May update, users speaking Shanghainese, Cantonese, and Sichuanese will be able to communicate smoothly with Xiaoyi using their most familiar pronunciations.

The absolute highlight of the entire press conference was that brand-new LiDAR.

HarmonyOS has always been aggressive in its use of advanced sensing hardware.

In 2023, the Wenjie M9 was the first in the industry to launch a 192-line LiDAR; at the end of last year, the Zunjie S800 launched a high-precision solid-state LiDAR and a distributed 4D millimeter-wave radar matrix. Today, that number has soared to 896 lines, making it the world's highest-spec mass-produced automotive LiDAR.

Before this radar officially debuted, Huawei first presented a report card:

During the recent nine-day Spring Festival holiday, HarmonyOS Intelligent Mobility users logged a peak daily assisted-driving mileage exceeding 60 million kilometers. With assisted driving enabled, the average mileage between serious collisions was 3.95 times that of human drivers. Just a month ago, the figure was 3.58 times. The data keeps being updated.

Jin Yuzhi, CEO of Huawei's Intelligent Automotive Solutions BU, said on stage:

To date, we are the only ones in the entire industry publishing monthly safety reports.

The safety data keeps improving, and the logic behind it comes down to a smart brain plus precise perception.

In terms of system architecture, Huawei adopted what it calls WEWA, bringing a world model into the vehicle; the algorithms are intelligent enough, so the next step was to make sure the system can "see clearly".

The sensors on the vehicle each have their own specialties. Cameras are passive sensors, like human eyes, very accurate at detecting colors and outlines, but prone to errors in bright light or darkness. Millimeter-wave radar and lidar are active sensors, emitting electromagnetic waves to detect objects. Millimeter-wave radar has a longer wavelength, easily penetrating rain, fog, and dust, making it excellent for ranging and speed measurement, but unfortunately, it struggles to depict the fine edges of objects.

LiDAR perfectly fills this gap in the puzzle. The new generation dual-optical-path image-level LiDAR released today has ushered in the era of 3D imaging for automotive perception, moving beyond the era of 3D point clouds.

Bringing in-vehicle perception into the imaging era

In the past, LiDAR had a limited number of beams, and the outlines of scanned objects were pieced together from sparse points. In complex environments, the information returned from an obstacle ahead was often just a blurry cluster of points.

To address the accuracy issue, Huawei has introduced an integrated dual-focus, dual-optical-path architecture. Simply put, they've packed two independent receiving units—one wide-angle and one telephoto—into a single radar housing.

Jin Yuzhi gave us an example:

When a vehicle approaches a complex intersection, both wide-angle and telephoto lenses work simultaneously. The wide-angle lens is responsible for monitoring the overall road conditions, while the telephoto lens focuses on distant details directly ahead. This high-definition "picture-in-picture" presentation method actually replicates the instinctive reaction of human drivers: using peripheral vision to observe the surroundings while keeping the absolute visual focus on what is ahead.

Compared to the previous 192-line LiDAR, the resolution has been increased fourfold, resulting in a significant leap in image detail. At the launch event, Jin Yuzhi showed footage of a test conducted 55 meters away in complete darkness. The new radar clearly captured the outlines of a pedestrian and three small dogs, even accurately detecting the subtle movements of the dogs wagging their tails.

This sophisticated perception capability is primarily designed to deal with challenging situations on highways, such as rubber fragments left behind by trucks at night, overturned traffic cones, or extremely difficult-to-discern, low-reflectivity, irregularly shaped objects.

According to Jin Yuzhi, this new radar can reliably identify obstacles higher than 14 centimeters at a distance of 120 meters.

The 14-centimeter threshold is carefully chosen. He explained that most passenger vehicles today have a ground clearance of more than 14 centimeters: an obstacle lower than that can basically be rolled over by the wheels, while anything higher poses a real risk of physical collision with the chassis and battery pack.

With enhanced sensing capabilities, the outer casing has also been reinforced. The radar surface is covered with a specially designed tempered glass window. Huawei conducted an extreme test, enduring a 30-hour, 3,000-kilometer sandstorm environment, after which the radar surface remained intact.

Regarding the development process of this radar, Yu Chengdong remarked on stage:

We started developing this LiDAR several years ago. The development cycle was very long and difficult, and it has only now come to market, after years of work.

He stated that the previous 192-line radar could also detect small obstacles that had fallen off the road, but the system typically chose not to intervene.

Because the point cloud data from that LiDAR was not reliable enough, the system did not dare act on it, in order to avoid false detections and unwarranted emergency braking.

After replacing the radar with an 896-line sensor, the system's capabilities were greatly enhanced, and it finally dared to make decisions under extreme conditions. In the on-site test video, the test vehicle, traveling at 120 km/h, cleanly and efficiently completed a series of emergency avoidance maneuvers when faced with multiple irregularly shaped obstacles and overturned tires appearing on the road.

The introduction of the latest and most advanced hardware also brings changes to the final price.

The press conference concluded with specific figures: the Zunjie S800, equipped with the brand-new LiDAR, starts at 728,000 yuan, 20,000 yuan more than the previous version; the starting price of the Wenjie M9 also rose, by 10,000 yuan, to 479,800 yuan.

ONE MORE THING: Shangjie Z7 and Z7T

After announcing the prices of the two flagship models, the press conference was actually nearing its end. But in the final stage, the big screen displayed a "One More Thing" that was outside the usual procedure.

HarmonyOS has shifted its focus from high-end business and family travel to younger consumers, releasing a significant amount of information about the Shangjie Z7. Yu Chengdong stated that the "Z" in Z7 represents the young, core group of Generation Z. The number 7 indicates its physical dimensions.

This is a mid-to-large-sized sedan following the classic "532" proportions: a body length of 5 meters, a wheelbase of 3 meters, and a width of nearly 2 meters. The large footprint gives the car a relatively spacious foundation.

As a tech-savvy coupe targeting the young market, the Shangjie Z7 comes out with a unique exterior design – an exclusive color scheme called "Electric Purple Pink".

In addition to its exterior design, the in-car cockpit interaction also features two novel hardware configurations.

First, there's the 4D screen, a first for HarmonyOS cockpits: when a passenger calls out to the voice assistant Xiaoyi, the central control screen automatically turns towards the person speaking. When the driver gives a command, the screen faces left; when the front passenger selects a song, the screen immediately turns to the right.

Another new feature is the "Inspiration Showcase" in front of the passenger seat. While the launch event didn't provide a detailed demonstration of its operation, the official description defines it as a dedicated space for expressing hobbies and attitudes, allowing users to maintain a highly customized display area within the vehicle.

In pursuit of a coupe-like fastback design, vehicles often have to make certain compromises in rear headroom and trunk capacity. Considering the strong demand from some users for loading capacity, HarmonyOS launched a derivative model—the Shangjie Z7T—while developing the Z7.

The letter T at the end clearly identifies it as a shooting brake. It retains the front design of the coupe version, but extends the roofline smoothly to the rear, resulting in more spacious cargo space.

Yu Chengdong revealed that this combination of good looks and practicality will be officially launched independently at the end of this month.

Follow us for anything on wheels, and feel free to discuss. Email: tanjiewen@ifanr.com



Having seen all the “AI PCs,” it turns out the Mac has always been here | AI Gadgets

At the beginning of the year, the Mac mini was out of stock, with waiting times reaching as long as a month and a half.

The Mac mini is a great product; everyone knows that. With competitive pricing through domestic channels and the excellent performance of the M chip, the entry-level configuration can be had for under 3,000 RMB, making it a perfect main machine for creative beginners.

However, the recent surge in popularity of the Mac mini has little to do with creative work or everyday use.

Those who follow tech news should know what's going on: OpenClaw (formerly known as Clawdbot) has suddenly become popular.

OpenClaw offers multiple deployment options: you can install it on your own computer or dedicate a separate computer to it; deploying it in a cloud-based virtual machine/sandbox environment is also fine; later, some mainstream AI services also launched cloud-based one-click deployment alternatives, significantly lowering the barrier to entry for novice users.

However, in the early stages, the most common deployment option was to buy a single Mac mini.

The reason is definitely not because it is cheap, but more importantly: for OpenClaw to be meaningful, it needs to be given a "physical body" so that it can access files and operate software.

A cloud server can run OpenClaw, but it's still not your computer. It doesn't have your files, software, or the various accounts logged into your browser, and there's no so-called "context." A Mac mini can sit on your desk 24/7 without needing to be turned off, and you don't even need a separate monitor if you can remotely control it via a chatbot.

The only significant cost of using OpenClaw on your own computer is the token fee for the large model API accessed on the backend; many early adopters have suffered losses because of this. However, if you buy a high-spec Mac mini and download a sufficiently large model to run locally, it's practically like getting free labor, aside from electricity and internet costs…

A MacBook is fine too, but…

According to reports from Tom's Hardware and TechRadar, after OpenClaw gained popularity, the waiting time for the 24GB and 32GB Mac mini configurations has increased to between 6 days and 6 weeks; the delivery time for the more powerful Mac Studio has also increased from two weeks to nearly two months.

These waiting times are the votes cast by early OpenClaw players using real purchases.

(Note: The shortage of some models is also related to Apple's recent launch of new Mac desktop computers. In the past, older models would sell out as the new model was about to be released. The popularity of OpenClaw is not the only reason.)

As if by some strange twist of fate, the Mac has become the top choice for an "AI PC" in 2026; by contrast, the Windows PC industry, which has been touting "AI PCs" for several years, has not benefited at all.

Chipmakers like Intel, AMD, and Qualcomm, along with mainstream PC brands, have been marketing the concept of "AI PCs" since 2023. Many of these latest Windows computers are certified Copilot+ PCs, boasting impressive GPU and NPU performance, and some are even significantly cheaper than equivalent Macs.

But the question is, why are people still flocking to Macs?

Why a Mac?

The debate over whether Windows PCs or Macs are better will never have a definitive answer. However, when it comes to AI development, Macs have become the unspoken choice.

While the "brain" of the large model resides on cloud servers, the developers' hands are on Macs. This has little to do with the Mac's form factor or user experience: the key is that macOS has UNIX roots.

The core functions of an AI Agent include manipulating files, calling command-line tools, scheduling APIs, and even controlling graphical interfaces. To put it more simply, the Agent is an intelligent and automated "script engineer," except that the scripts are generated in real-time by a large language model. macOS, being a UNIX-like system, has excellent native support for bash and zsh commands.
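
To make the "script engineer" idea concrete, here is a toy sketch of the basic pattern (this is not OpenClaw's actual implementation, which the article doesn't describe): the model proposes a shell command, the host machine runs it, and the output is fed back as context for the next step.

```python
# Toy illustration of an agent-style shell tool on a UNIX-like system (macOS/Linux).
# Not OpenClaw's real code; it only shows the "model proposes a command,
# host runs it, output goes back into context" pattern described above.
import subprocess

def run_shell_tool(command: str, timeout_s: int = 30) -> str:
    """Run a shell command (e.g. one proposed by a language model) and return its output."""
    result = subprocess.run(
        command,
        shell=True,            # executes via /bin/sh on macOS and Linux
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    # Return both streams so the model can see errors and self-correct on the next turn
    return result.stdout + result.stderr

if __name__ == "__main__":
    # A model-generated step such as "find the five largest items in ~/Downloads"
    print(run_shell_tool("du -sh ~/Downloads/* 2>/dev/null | sort -rh | head -n 5"))
```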

This solves the most basic environment setup problem in AI development. On Windows, you might need to install a WSL2 virtual machine first. But on Mac, everything from the Python environment to the complex C++ compilation toolchain is basically ready to use out of the box. Package managers like Homebrew make installing various tools and dependencies a simple matter of a single command.

Additionally, macOS complies with the POSIX standard, offering slightly higher reliability when handling file paths, multi-threaded tasks, and network protocols. Agents often need to frequently read and write data and call APIs; efficient system-level scheduling allows agents to operate at a faster pace on a Mac.

This native feel and stability allow developers and early adopters to get started more quickly and spend more time on actual agent orchestration.

Windows has WSL and PowerShell, which cover most of the functionalities. However, WSL is a compatibility layer built on top of Windows, and it suffers from legacy issues such as path conventions, registry mechanisms, and permission models. Therefore, there will indeed be more friction between AI models and agent projects running on Windows.

Take Ollama and LM Studio as examples: these two tools make on-device inference of large models as simple as "download, install, run". The Windows version of Ollama was released six months later than the macOS version; and although LM Studio has supported both platforms from the start, the Mac version has always had the better reputation in the community. The same is true for OpenClaw.

Delving deeper into the hardware level, memory is the lifeblood of reasoning and execution in large language models.

Taking OpenClaw as an example again, users can access cloud models by paying with tokens, but its strength lies in driving model inference on the client side. According to general research, in order for OpenClaw to work like a person with a normal IQ, the minimum number of backend model parameters is around 7 billion, and it often needs to reach at least 32 billion parameters to work relatively stably.

Even after 4-bit quantization, such a large model still requires approximately 20GB of memory (some of which needs to be reserved for the context window).
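
That ~20GB figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses illustrative numbers only (real usage varies with the quantization format, runtime, and context length):

```python
# Back-of-the-envelope memory estimate for running a 32B-parameter model locally.
# Illustrative numbers only; actual usage depends on quantization format and context length.
params = 32e9            # 32 billion parameters
bits_per_weight = 4.5    # ~4-bit quantization plus per-block scaling overhead

weights_gb = params * bits_per_weight / 8 / 1e9
kv_cache_gb = 2.0        # rough allowance for a few thousand tokens of context

print(f"Weights:  ~{weights_gb:.1f} GB")   # ~18 GB
print(f"KV cache: ~{kv_cache_gb:.1f} GB")
print(f"Total:    ~{weights_gb + kv_cache_gb:.1f} GB")  # roughly the 20 GB cited above
```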

At this point, the architecture of Windows PCs becomes inadequate. Physical isolation exists between CPU memory and video memory, and data is transferred via the PCIe bus, making it susceptible to bandwidth bottlenecks. Frequent data transfers can impact the speed of the inference process.

Not to mention, large models generally rely on GPUs for accelerated inference, which requires enough video memory to hold the weights. Among NVIDIA's consumer-grade graphics cards, only the 24GB models (the 90 series) meet the requirement; a complete build costs at least 10,000-plus RMB, and with a brand-new card it soars to 40,000 to 50,000 RMB.

Apple's Unified Memory Architecture allows Macs with M-series chips to handle larger-scale models with ease when performing inference on the device.

In simple terms, the effect of a unified memory architecture is that the CPU, GPU, and neural computing engine can share the same memory pool, eliminating the overhead of physical bus transfers. This allows Macs to achieve extremely high memory bandwidth and provides better performance for multi-machine interconnection.

Taking the Mac mini as an example, choosing the higher-performance M4 Pro processor, paired with 48GB of memory, and selecting the basic configuration for the rest, the total price of the machine is around 13,000 yuan, which can reach the configuration level of the 32 billion parameter model generally recommended by the OpenClaw community.

Of course, this is only a professional configuration that requires high token throughput. If you are an enthusiast and just want to try out OpenClaw, you can run it with a standard M4 chip and 32GB of RAM.

Of course, this cost comparison is based on the premise that it's dedicated to edge inference/running OpenClaw, rather than being used as a primary machine. A similarly priced Windows PC can also be used for gaming and video editing, offering greater versatility.

Furthermore, Mac's unified memory and the dedicated VRAM of a PC platform's graphics card are not the same thing. Unified memory is shared by the system and the model; even on a Mac mini with 32GB of RAM, the macOS system and other software still require several gigabytes of memory. On the other hand, the RTX 3090's dedicated VRAM allows the model to use all of it, and it can even run larger quantization models in conjunction with the CPU and memory.

If you only use the cloud API as the core of OpenClaw and don't consider edge deployment, then the ease of use of Mac still holds an advantage.

In addition, although CUDA provides a unified memory programming interface, the CPU memory and GPU memory are still physically separate, and data transfer and bandwidth bottlenecks have not been eliminated.

Next, let's look at power consumption.

The agent operates in a continuous loop: task triggering, reasoning, execution, waiting, and then triggering again. A Windows PC with the aforementioned configuration would run at around 300-400W (local deployment), and the heat dissipation, noise, and electricity costs are not insignificant.

The Mac mini typically has a stable power consumption of around 10-40W, with a peak power of 65W (M4) or 155W (M4 Pro). Its heat dissipation is controllable, with almost no fan noise, resulting in quieter operation. This low-latency, low-power continuous operation creates a subtle difference in user experience.
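
For a sense of scale, here is a quick cost comparison under assumed numbers (an electricity price of 0.6 yuan/kWh and typical sustained loads of 35W versus 350W; all values are illustrative, not measurements from the article):

```python
# Rough monthly electricity cost of a box running an agent 24/7.
# Assumptions (illustrative): 0.6 yuan per kWh; ~35 W sustained for a Mac mini-class
# machine vs ~350 W for a GPU tower doing local inference.
HOURS_PER_MONTH = 24 * 30
PRICE_PER_KWH = 0.6  # yuan, assumed

for label, watts in [("Mac mini-class (~35 W)", 35), ("GPU tower (~350 W)", 350)]:
    kwh = watts / 1000 * HOURS_PER_MONTH
    print(f"{label}: {kwh:.0f} kWh/month, ~{kwh * PRICE_PER_KWH:.0f} yuan/month")
```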

A Mac mini shell kit 3D-printed by a netizen, named "Clawy MacOpenClawface".

Of course, our discussion will focus more on OpenClaw, a scenario primarily driven by reasoning. If your work involves local fine-tuning and you prioritize efficiency, then on the macOS platform, you'll often need Mac Studio, or at least a top-of-the-line MacBook Pro, to even begin to grasp the basics.

At the same time, the fact that Macs don't support CUDA is something that may never change. However, CUDA's real battleground is model training; inference scenarios rely on it much less, since Apple has MLX as its trump card for inference (which will be discussed in detail later).

Returning to OpenClaw: its creator, Peter Steinberger, has publicly stated that he prefers Windows and finds it more powerful. In the Lex Fridman podcast, he said that the Mac mini is not the only "physical" option, and that running OpenClaw via WSL2 is already very mature; he even publicly criticized Apple for "messing up" in the field of AI and expressed dissatisfaction with the closed nature of Apple's ecosystem.

Objectively speaking, for users with limited technical skills, the Mac mini is indeed the most worry-free and easiest-to-use solution for deployment. The main reason is its power consumption, quiet operation, and small size, making it like a "server node" that can be plugged into a corner, is on standby 24 hours a day, and requires no maintenance.

Another example related to power consumption: A few days ago, an engineer named Manjeet Singh successfully reverse engineered the Neural Engine (ANE) on the M4 processor and found that the ANE has extremely high power efficiency: its efficiency is as high as 6.6 TOPS/W when the computing power is fully utilized.

Compared to Apple's M4 GPU, which is approximately 1 TOPS/W, Nvidia's H100 is about 0.13, and its A100 is 0.08 TOPS/W.

To put it in perspective, the throughput of a single A100 card is 50 times that of the M4 ANE, but the A100's power consumption is 80 times that of the M4 ANE. The original author wrote: "For edge inference, the performance of the ANE is outstanding."

Let's start with the neural engine

In 2011, Apple first implemented real-time face detection and other functions that were later regarded as AI tasks by hard-wiring them into the image signal processor (ISP) of the A5 chip.

In 2014, Apple acquired PrimeSense and began developing a new coprocessor specifically for neural network computing. This work was realized three years later in the iPhone X: the A11 Bionic processor incorporated the aforementioned Neural Engine (ANE), with a computing power of only 0.6 TOPS, to drive Face ID and Portrait mode.

At that time, AI hadn't yet reached the era of large-scale models; it mainly relied on various machine learning algorithms. The market didn't react much to Apple's launch of this coprocessor. But Apple never gave up and continued to invest heavily.

Three years later, the M1 was released, along with a unified memory architecture, and ANE was also introduced to the Mac. The more ample power budget for desktop platforms allowed ANE's computing power to jump to 11 TOPS. Subsequent generations saw further improvements: M2 at 15.8 TOPS, M3 at 18 TOPS, M4 at 38 TOPS, and by the end of 2025, M5 had reached 57 TOPS. From M1 to M5, Apple's ANE computing power increased more than fivefold.

Other PC manufacturers can't help but envy the logic behind this growth. Before Apple added AI acceleration hardware to Macs, tens of millions, even hundreds of millions, of iPhones were already running the same ANE architecture. Power consumption performance, stability, and edge cases under extreme conditions had already been verified on commercially available models, and then transferred to Macs.

Intel and AMD have virtually no consumer-grade presence in the mobile market; while Qualcomm has also put Snapdragon chips into hundreds of millions of Android phones, it is merely a chip supplier. AI on Android is developed by Google (Gemini) and major phone manufacturers in collaboration with third-party AI labs; Windows AI (Copilot) is developed by Microsoft.

Apple's difference lies in its vertical integration, controlling both hardware and software. Other chip manufacturers do not have this unified control.

Of course, running large language model inference on a Mac has little to do with the ANE, which is better suited to AI tasks with fixed patterns, such as Face ID and on-device image recognition. The GPU handles the majority of the computation.

(Note: The situation has recently changed slightly. First, the ANE on M-series chips now handles the prompt prefill stage; and in the M4 ANE reverse-engineering work mentioned earlier, the engineer also implemented a way to skip CoreML and call the ANE directly, significantly improving throughput. Following this line of thought, it might be possible to find a general method of using the ANE to accelerate inference and even training.)

In late 2023, Apple open-sourced MLX, giving developers a model inference framework optimized specifically for the M-series chips. Last year, the Foundation Models framework was released alongside Apple Intelligence, allowing app developers to access the system's built-in foundation models on iPhones and Macs with no internet connection required and no data leaving the device.

Apple's repeated delays in developing AI are undeniable. However, it's also an undeniable fact that Apple began experimenting with AI as early as 10 years ago, laying the foundation for desktop AI development many years ago.

On the Windows side, the term "AI PC" didn't start appearing in press releases and presentations from Intel, AMD, and PC manufacturers until the end of 2023.

Screenshot from AMD's official website in 2023

In May 2024, Microsoft released the Copilot+ PC certification system, with its flagship feature called "Recall". The basic logic is that the system continuously takes screenshots of the screen content, and then Windows' system-level AI can help you recall what you have seen in the past.

Regardless of the actual significance of this feature at the time of its release, its security was first found to have serious problems: just one month after its release, researchers discovered that the Recall feature stored all screenshots in an unencrypted local plaintext database.

Microsoft abruptly removed the Recall feature. Six months later, Microsoft released a beta version again, but it was delayed once more due to new security issues. Recall was finally officially launched in April 2025, but it was switched to being disabled by default, and data was stored in encrypted form when enabled.

From the initial announcement to actual usability, it took nearly a year. It's fair to say that the flagship feature of the entire Windows AI PC push underwent a complete redesign, a process no less awkward than the repeated delays of Apple Intelligence and the new Siri. Yet perhaps because the Windows ecosystem generates so little buzz here, few people have paid attention to AI PCs, and many have never even heard of the feature.

Regarding the certification standards for Copilot+ PCs, Microsoft's requirement primarily targets the neural processing unit (NPU), demanding 40 TOPS. However, this computing power is used for narrow consumer-facing tasks such as real-time captioning, background blurring, and photo enhancement; large language model inference was never within its scope (similar to Apple's ANE).

When developers attempt to perform large-scale language model inference on the device, they find that although these computers are called AI PCs, they are not optimized for AI inference purposes. Microsoft Copilot's core computing power comes from the Azure cloud, and is almost unrelated to the computing power on the device itself. For users who have purchased a Windows AI PC, the most noticeable AI improvement is probably real-time captioning and automatic photo classification.

When it comes to edge inference, there is another key factor: the optimization paths in the Windows AI ecosystem are fragmented.

NVIDIA GPUs use CUDA and TensorRT, Intel NPUs use OpenVINO, Qualcomm NPUs use the QNN SDK, and AMD NPUs use their own driver stack. Model storage formats are also fragmented, with a general format for CPU+GPU inference (GGUF, more precisely CPU inference with layer-by-layer GPU offloading) and a GPU-only format (EXL2).

This means that running models and model-driven functionalities on Windows AI PCs will be more complex in terms of the inference backend. Microsoft has ONNX Runtime and DirectML (which is currently in a state of renewal) as a unified abstraction layer, but the cost of unification is sacrificing the peak performance of each vendor. Apple is currently the only PC vendor that has developed and continuously maintains an LLM inference framework specifically for its own PC hardware; this framework is MLX.

On open-source model platforms like Hugging Face, you can easily find a large number of models that use the MLX framework. As long as they have the MLX suffix and your memory/processor allows, they can be used "out of the box".
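
As a minimal sketch of what "out of the box" means in practice (assuming the mlx-lm Python package; the model repo name below is an illustrative placeholder, not a specific recommendation):

```python
# Minimal sketch of running an MLX-format model on Apple Silicon.
# Assumes `pip install mlx-lm`; the Hugging Face repo name is an illustrative placeholder.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/SomeModel-32B-4bit")  # downloads the weights on first run

prompt = "Summarize why unified memory helps on-device LLM inference."
text = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(text)
```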

However, the recent departure of Awni Hannun, one of MLX's key contributors, from Apple has introduced some uncertainty into the project's future development. Hannun also stated that the MLX team still has many excellent employees, so there's no need to worry.

Our own experience

Over the past year, iFanr has conducted numerous tests on deploying AI models on edge devices and has also interviewed some external developers. Two instances are worth mentioning.

Last Chinese New Year, DeepSeek burst onto the scene, and the new Mac Studio was released shortly after. On an M3 Ultra Mac Studio (512GB of memory + 16TB of storage) priced at nearly 100,000 RMB, we ran the DeepSeek R1 671B model and the distilled 70B version (note: in practice only the memory matters and the SSD doesn't need to be that large; a 1TB model costing just over 70,000 RMB would suffice).

Our conclusion at the time was that the 70B model was sufficient for everyday on-device chat, and that spending this much on a machine just to chat with AI was simply a waste of money. The model capabilities back then were indeed unimpressive; it was only later that new multimodal models and agent capabilities emerged.

However, the fact that the massive number of parameters in the 671B model can be used for edge inference on a desktop machine is still a remarkable feat. On a 512GB unified memory, the 671B model occupied 400GB. With the context, the macOS system itself, and other tasks, it was almost at full load, but the machine ran quietly throughout, with noise levels within the normal range and no overheating.

In traditional AI infrastructure logic, this scale of parameters falls under the data center level, and consumer-grade hardware shouldn't theoretically appear in this scenario. But that M3 Ultra Mac Studio actually appeared quietly nonetheless.

Later, we interviewed Exo Labs, a startup team from Oxford University in the UK. They used four Mac Studios with 512GB of unified memory each to form a computing cluster with 128 CPU cores, 320 GPU cores, 2TB of unified memory, and a total memory bandwidth of over 3TB/s.

The team developed the Exo V2 scheduling platform for this Mac cluster, which can load two DeepSeek models (V3+R1, 8-bit quantization) simultaneously. Not only can the two models infer in parallel, but researchers can also use QLoRA technology to perform local fine-tuning, significantly reducing the training time. The entire system's power consumption is kept below 400W, and there is virtually no fan noise during operation.

The traditional solution with equivalent computing power would require about 20 NVIDIA A100s, costing more than 2 million RMB at the time; in contrast, the total cost of Exo Labs' solution was only 400,000 RMB (similarly, the SSD was significantly overkill, so it could actually be under 300,000 RMB).

The founder of Exo Labs told us at the time that Oxford had its own GPU cluster, but applications required waiting in line for months, and only one card could be applied for at a time. These constraints forced them to innovate, and they happened to find the right tools: a unified memory architecture, MLX, and Mac computers.

In our article at the time, we wrote: "If Nvidia's H-series graphics cards are the pinnacle of AI development, then Mac Studio is becoming the Swiss Army knife in the hands of small and medium-sized teams."

Apple actually knew about this a long time ago.

What is a true AI PC?

Last year, Apple released the Foundation Models framework, which allows iOS and macOS developers to call the system's built-in foundation models with zero network latency, zero API fees, and no data leaving the device.

Although Apple's foundation-model team nearly disintegrated later on, Apple didn't stop iterating. It always knew where developers were and what they wanted. Its response was to build large-model-driven AI capabilities into the operating system's infrastructure, making them easier for developers to use.

Last week, Apple open-sourced the python-apple-fm-sdk. Previously, fully testing and tuning Apple's foundation models required a Swift environment; this SDK widens the path, letting developers accustomed to Python workflows take part as well.

Apple's privacy design philosophy is consistent throughout: the underlying models called by the python-apple-fm-sdk run entirely locally, and the data never leaves the device. In scenarios where Apple's entire AI system must be deployed to the cloud, it uses Private Cloud Compute, where data is processed and then deleted, and Apple has no access to it.

By contrast, Microsoft's Recall also lets AI access users' private data, but its first version stored that data in an unencrypted plaintext database. One approach prevents leaks by architecture; the other only patches things up after an incident.

However, the Mac's advantage as an AI development and deployment tool is more of an "adaptability advantage," something Apple stumbled into rather than planned.

That is, Apple originally built the Neural Engine to serve Face ID and Portrait mode; the unified memory architecture was a necessary step in breaking its long-standing dependence on Intel; and open-sourcing MLX was a response to developers' demand for efficient inference tools. That the Mac happened to be positioned to catch the explosion of AI agent scenarios was an unplanned dividend of these and many other unmentioned engineering decisions.

The Mac wasn't initially designed for AI; its product positioning has always been closer to that of a "creator's tool." Apple's long-term target users have been video editors, artists, and software engineers. They need machines with low noise, sustained performance, large memory capacity, and the ability to run around the clock.

AI model inference and the currently popular Agent deployment just happen to require the exact same thing.

Looking back, when Apple went all in on machine learning more than a decade ago, it almost certainly could not have foreseen OpenClaw's explosive popularity in 2025. You could even argue that the Apple of ten years ago probably wouldn't have liked OpenClaw at all: a platform that seemed to promise "high returns and even bigger opportunities," where, once the frenzy took hold, users' privacy and data security were disregarded and basic software engineering norms were ignored…

But how to put it? Even if Apple doesn't like it now, it has no real choice. Like Murphy's Law, perhaps some things were destined from the start: every card Apple has played over the years, whether intentional or accidental, has turned into a winning hand in this Year of the Agent (hopefully it really is, this time).

The Windows camp, which began pushing AI PCs in 2023, has essentially been trying to catch up with the architectural advantage Apple established when it launched the M1 in 2020. Of course, given the steady stream of bad news about Apple's AI efforts in 2025, that gap could still be closed. But Apple isn't going to stand still and wait.

This week, Apple launched the M5 Pro and M5 Max, chips built around a dual-chip fusion architecture, and it specifically named LM Studio as an LLM performance benchmark in the press release.

In the past, Apple didn't talk much about "large language models" in its hardware product launches, especially in the context of on-device inference—but things are different now.

In conclusion

We've been gushing about Apple this whole time, so let's calm down and ask the question in the title: is today's Mac a true AI PC?

iFanr believes that Apple hasn't done enough. To date, we haven't seen a personal computing product that can be called an AI PC, or truly "native AI hardware."

Returning to OpenClaw: the true form of an AI PC is already taking shape in today's on-device agents.

Meme, AI generated

At the application level, the concept of "applications" built for humans may partially recede to a state without graphical interfaces. After all, humans need graphical interfaces; agents don't. And you may have noticed that more and more people have recently grown used to interaction based on dialogue and the command line.

Today, early adopters of agents are finding tools and skills to equip them with; in the future, agents will themselves be pulling new tools and plugins from public code repositories to enhance themselves.

At the system level, the permission system will be restructured around how agents work, allowing agents to operate the various interfaces directly. At the bottom layer, there will be a model orchestration and scheduling mechanism that switches between models as needed, depending on the task.
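No such system-level scheduler exists on today's consumer operating systems, but the idea is easy to sketch. The routine below is a purely hypothetical Python illustration of task-based model routing; the model names and the route_task helper are invented for this example and do not correspond to any real API.

```python
# Hypothetical sketch of task-based model routing; not a real system API.
# All model identifiers below are invented placeholders.
from dataclasses import dataclass

@dataclass
class ModelChoice:
    name: str          # which model the task is dispatched to
    on_device: bool    # whether the task stays local

def route_task(task_type: str, contains_private_data: bool) -> ModelChoice:
    """Pick a model per task: small local models for quick or private work,
    a larger remote tier (behind private cloud compute) for heavy lifting."""
    if contains_private_data or task_type in {"summarize_note", "autocomplete"}:
        return ModelChoice(name="local-3b-instruct", on_device=True)
    if task_type in {"code_refactor", "long_report"}:
        return ModelChoice(name="private-cloud-70b", on_device=False)
    return ModelChoice(name="local-8b-instruct", on_device=True)

# Example: a private note stays on device; a big refactor goes to the cloud tier.
print(route_task("summarize_note", contains_private_data=True))
print(route_task("code_refactor", contains_private_data=False))
```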

Local inference and privacy-preserving cloud inference will together form a complete, secure, privacy-preserving loop: wherever the data is sent, it is vectorized, encrypted, and stored, then destroyed as soon as it has been used…

In other words, a true AI PC should be a system that treats AI as a "first-class citizen" from the very beginning of its design, starting from the ground up.

Meme, AI generated

By this standard, both Mac and Windows are currently in a transitional phase. The Mac is closer, because its Unix environment, unified hardware, and mature ecosystem were already in place before the era of AI agents arrived. Windows carries heavier historical baggage, which makes change harder, and it is still catching up.

But after going around in circles, we still haven't gotten to the most fundamental question: Does a true AI PC really need to be a "PC"?

Shift the perspective: all agent deployment and operation happen in the cloud; the user-related data, i.e. the "context," is also stored securely and privately in the cloud; humans only need a terminal device that acts as a "communicator," with sensors for photos and audio to upload the necessary data to the agent. Such a device wouldn't even need much on-device computing power.

Mac is the best AI PC today, but the "AI PC" of the future may be more like… iPhone?

By Du Chen




After the national subsidy, it's just over 3,000 yuan! Apple's dopamine-colored MacBook Neo is here, and there's no notch!

Finally, Apple officially released the highly anticipated new "entry-level" MacBook, named "MacBook Neo," with prices starting at 4,599 yuan.

The suffix "Neo" appears for the first time in Apple's fifty-year product history. The word itself is an English prefix of Greek origin meaning "new," so the MacBook Neo can be loosely read as a "MacBook Youth Edition."

A youthful suffix naturally calls for a refreshing color scheme, and the MacBook Neo comes in four colors: yellow, pink, dark blue, and silver.

From what we saw at the event, the MacBook Neo and the MacBook Air are indeed quite similar, except that the screen is 13 inches, slightly smaller than the MacBook Air's 13.6 inches. The screen has no notch, though, at the cost of thicker bezels.

The new MacBook Neo has noticeably more rounded edges and is 1.27 cm thick, thicker than the MacBook Air's 1.13 cm, but it weighs the same 1.23 kg. Overall it still feels refined and compact.

Even at this low price, the MacBook Neo keeps an all-metal body, and the build quality feels solid; in the 3,000-4,000 yuan range (after national subsidies or education discounts), it should count as top-tier.

As the leaks suggested, the MacBook Neo is powered by the A18 Pro, but unexpectedly in a 6-CPU-core + 5-GPU-core configuration, one GPU core fewer than the iPhone 16 Pro.

While the A18 Pro's performance is adequate, Apple has given the MacBook Neo only 8GB of RAM, which will be seriously insufficient in 2026: background multitasking will be heavily constrained, and even keeping many browser tabs open will be a struggle. If you need demanding applications such as 3D modeling and rendering, think carefully before buying.

Nor does the MacBook Neo offer a memory upgrade option, because the A18 Pro's RAM is packaged together with the SoC; the only paid upgrade is storage, from 256GB to 512GB.

By comparison, last year's iPhone 17 Pro has an A19 Pro chip and 12GB of RAM. A phone out-speccing a computer sounds upside-down, but considering the computer costs only about half as much as the phone, it isn't really that outrageous.

The 8GB memory ceiling also means this MacBook Neo is really only suited to light office work, entertainment, study, and light creative tasks; in a sense, Apple has made a "netbook."

Although it uses a phone chip, the new MacBook Neo runs the full macOS, which makes it more convenient than the pricier iPad Pro for certain light office tasks, and it is "ready for Apple Intelligence." That said, the cost-cutting shows in a long list of trade-offs:

  • The screen does not support True Tone or the P3 wide color gamut.
  • It has only two side-firing speakers, versus the MacBook Air's four.
  • There is no keyboard backlight, and Touch ID fingerprint recognition comes only with the pricier 512GB configuration.
  • The trackpad is mechanical, without Force Touch pressure sensing.
  • Two USB-C ports, one USB 3 and one USB 2; no MagSafe.
  • The 3.5mm headphone jack does not support high-impedance headphones.
  • It can drive only one external 4K display at up to 60Hz, whereas the MacBook Air can drive two external 6K displays.
  • The camera is 1080p and supports neither Center Stage nor desk view, but at least the screen is notch-less.

Surprisingly, Apple has brought back the mechanical trackpad for the MacBook Neo, making it the first MacBook since 2015 to lack a Force Touch trackpad.

If the MacBook Neo can fully benefit from the 20% national subsidy, its price will drop further to the 3,000 yuan range.

Meanwhile, with national subsidies and education discounts, the M4 MacBook Air still sells for around 5,000 yuan on e-commerce platforms, only about 1,000 yuan more than the MacBook Neo.

Keep in mind that within the Mac lineup, a memory upgrade alone costs 1,000 yuan. Choosing the M4 MacBook Air, which starts at 16GB, is therefore like paying only for a memory upgrade while also getting better performance, build quality, ports, and more, which is genuinely good value.

If you have certain work performance requirements and don't need to take your computer out often, then the Mac mini is definitely a better choice—the computing power of the M chip should not be underestimated.

Therefore, if you're planning to buy a MacBook soon, I would still recommend the M4 MacBook Air.

However, iFanr has exclusively learned that after the MacBook Neo goes on sale, Apple's official website and other channels will gradually phase out the 256GB M4 MacBook Air, making a brand-new unit hard to find in the future.

If your work is mostly simple document processing and email, you only keep a few web pages open, and you prefer traditional keyboard-and-mouse interaction, then the MacBook Neo is perfectly capable, and it offers battery life, build quality, and screen quality that Windows laptops in this price range don't.

For those who just want a way into the Mac lineup, the MacBook Neo is an alternative to the Mac mini; and a low-priced MacBook that performs decently yet isn't much good for gaming is also well suited as a daily study-and-entertainment machine for primary and secondary school students.

Like the iPhone 17e, the MacBook Neo's main battleground is likely overseas, where it will compete with lower-priced Windows laptops and Chromebooks in the education and bulk-purchase business markets.

What do you think of the new MacBook Neo, and what would you most like us to test? Tell us in the comments, and we'll share our results.

