
The new Mercedes-Benz EQS features a half-spoke steering wheel, but it has run head-on into China's new national standard.

Mercedes is going to make changes to the EQS again.

As a flagship model in the early stages of electrification, the EQS has had a tough few years. Its rounded shape and overly conservative technological pace have made it difficult for consumers accustomed to the traditional S-Class aura to readily open their wallets.

In an attempt to salvage the situation, Mercedes-Benz hastily reintroduced the traditional grille in 2024 and even put the iconic hood ornament back on the front of the car.

Now, the second major redesign is here.

This time, Mercedes-Benz has finally brought out something truly impressive. According to official information, the 2026 EQS will be the first to feature steer-by-wire technology, and it will also come with a new half-spoke steering wheel.

For over a hundred years, a metal steering column has connected the steering wheel to the front wheels, and the bumpier the road, the more the steering wheel vibrates in your hands.

Turn the steering wheel through a given angle and the front wheels obediently turn through a fixed, proportional angle. This purely mechanical linkage has defined the driving experience of the entire automotive industry.

Steer-by-wire severs this physical connection entirely. As in a video game, the steering wheel becomes a purely electronic input device: sensors capture your wrist movements, convert the rotation angle and force into electrical signals, and those signals command motors on the chassis to turn the front wheels.

The most noticeable change brought about by steer-by-wire is the steering ratio.

Anyone who has driven a large sedan knows that maneuvering through a narrow underground parking garage or making a U-turn at an intersection often means turning the steering wheel several times, even with rear-wheel steering. With steer-by-wire, the steering ratio at low speeds shrinks dramatically: a flick of the wrist and half a turn is enough to bring the front wheels to full lock.

On the highway, the system deliberately makes the steering less sensitive, so a slight tremor of the hands will not make the car veer.
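The speed-dependent ratio described above is easy to picture in code. Here is a minimal Python sketch of the idea; the ratio endpoints and blend range are illustrative assumptions, not Mercedes' actual calibration:

```python
def wheel_angle(steering_input_deg: float, speed_kmh: float) -> float:
    """Map steering-wheel angle to front-wheel angle with a speed-dependent ratio.

    Illustrative numbers only: a direct ratio at parking speeds,
    a much more indirect one at highway speeds.
    """
    LOW_SPEED_RATIO = 5.0    # ~half a turn of the wheel reaches full lock
    HIGH_SPEED_RATIO = 20.0  # small hand movements barely move the wheels
    # Blend linearly between 0 and 120 km/h, then clamp.
    t = min(max(speed_kmh / 120.0, 0.0), 1.0)
    ratio = LOW_SPEED_RATIO + (HIGH_SPEED_RATIO - LOW_SPEED_RATIO) * t
    return steering_input_deg / ratio
```

With these made-up numbers, a 90-degree wheel input turns the front wheels 18 degrees in a parking garage but only 4.5 degrees at highway speed, which is exactly the behavior the EQS is said to exploit.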

It must be said that such characteristics are very suitable for a large luxury sedan like the EQS.

Everyone will naturally worry about what happens if the electronic system malfunctions. Mercedes' solution is to pile on hardware: dual signal transmission paths, backup power supplies, and twice the number of actuators.

Even if the main system failed on the road, the vehicle could still come to a safe stop using rear-wheel steering and braking the wheels on one side.

The only concern might be the steering wheel.

The benefits of removing the upper half of the steering wheel are obvious: it makes the instrument panel easier to read; and without the curved lower rim, legroom becomes more generous and getting in and out of the car is less cramped.

Tesla brought the half-spoke steering wheel to the public eye, but at the time, the steering experience of the Model S and Model X wasn't particularly good.

Tesla did not pair its half-spoke steering wheel with steer-by-wire. With a traditional steering ratio and half the rim missing, drivers often grabbed at air when shuffling the wheel through a low-speed U-turn.

Mercedes-Benz learned from this lesson, and thanks to the addition of steer-by-wire, the driver only needs to hold the steering wheel at the three and nine o'clock positions to complete all operations.

If you want to verify whether the steer-by-wire is any good, you don’t actually need to wait for this facelifted EQS; the NIO ET9 is a ready-made example.

Judging from the ET9's performance, steer-by-wire is indeed the right solution for full-size sedans. Once you get used to how little steering input is needed, threading a behemoth more than 5.2 meters long through city streets becomes easy.

However, if Mercedes wants this system to be successfully implemented in China, the shape of the steering wheel will need to be changed.

In February of this year, the Ministry of Industry and Information Technology released the draft for approval of the mandatory national standard GB 11557-202X "Regulations on Preventing Injuries to Drivers from Automobile Steering Mechanisms", which will be implemented on January 1, 2027.

The key point is that the original definition and adaptation specifications for half-spoke steering wheels have been completely deleted in this new national standard.

In the automotive industry's regulatory and type-approval system, a design that loses its definition and supporting specification in the national standards generally loses its legal, compliant status and can no longer be used.

Once the new regulation takes effect next year, half-spoke steering wheels with an open top will be banned from the domestic market.

Mercedes could take a look at the NIO ET9's steering wheel: although it also flattens the top and bottom, NIO retains a horizontal connection across the top, so the rim remains a complete closed loop and fits within the new national standard's safety line.

Technology can run wild in the laboratory, but when products are put into use, they must eventually bow to local regulations.

Anyone interested in products with wheels, please follow us. Welcome to discuss. Email: tanjiewen@ifanr.com


Morning Briefing | Foldable iPhone Expected to Launch in Second Half of Year / WeChat Shuts Down "Payment Discounts" Mini Program / Pang Donglai Responds Again to Egg Canthaxanthin Controversy


Reports suggest that Apple's foldable iPhone is already in trial production and is expected to be released in the second half of this year.

Codenamed "Potato," GPT-6 rumored to be released this month.

Luo Fuli discusses Anthropic's ban on "lobster": It's understandable, but OpenClaw's context management is "terrible".

A new term has emerged in the AI world: the fear of becoming obsolete (FOBO).

Xiaomi celebrates its 16th birthday; Wang Teng sends his wishes.

OpenAI launches security research scholarship

Vibe Coding played a crucial role, leading to a surge in App Store submissions.

Intel executive: Raptor Lake processors remain a core strategic component

Apple has approved drivers for Nvidia and AMD external graphics cards, but they cannot be used for gaming.

Nikon Z9 and classic D5 join forces on the Artemis 2 lunar mission.

Debate the CEO? Anthropic says the company promotes an open culture.

Former Rockstar engineer: Finding a job is so difficult now, it's like "flipping a coin".

Samsung S27 reportedly adds "Pro" model

Great Wall Motors unveils new car design; Wei Jianjun solicits names online.

OPPO Find X9s Pro Hasselblad Teleconverter "Silver" Design Announced

WeChat's "Payment Discounts" mini-program has announced its shutdown and will officially close at the end of December.

Pang Donglai responds again to the egg canthaxanthin controversy: All tests passed.


New overseas phishing scams exposed

Hongguo Short Drama: Continuing to Address Illegal Use of AI-Powered Short Drama Content

Big news

Reports suggest that Apple's foldable iPhone is already in trial production and is expected to be released in the second half of this year.

According to the China Securities Journal, Apple's first foldable iPhone has entered the trial production stage.

The report indicates that Foxconn has begun trial production of Apple's foldable iPhone. According to previous reports from supply chain companies, Apple's shipment guidance to suppliers is for the launch of its first foldable phone in the second half of this year, positioned as a large foldable iPhone.

Bloomberg reporter Mark Gurman also stated that iOS 27 will be compatible with the foldable iPhone Fold.

It is reported that iOS 27 will have a completely new layout on the iPhone Fold: when the device is unfolded, the overall interface will be more like iPadOS, and there may also be an app navigation bar similar to iPadOS on the left.

Previously, research firm Counterpoint pointed out that the foldable phone market will enter a new phase of competition in 2026 as Apple prepares to launch its first foldable iPhone.

Given the high-end positioning of foldable screens, early demand for Apple's foldable iPhone is expected to come primarily from existing iPhone users. Some users considering a book-style foldable screen may also view Apple's upcoming device as an alternative, thus increasing the likelihood of ecosystem migration.

Big companies

Codenamed "Potato," GPT-6 rumored to be released this month.

GPT-6, codenamed "Potato," completed its pre-training on March 24 at the Stargate data center in Texas after two years of secret development.

In a podcast interview, OpenAI President Greg Brockman confirmed the existence of the model, stating that it is "not an incremental improvement, but a major shift in how we think about model development." Sam Altman told all employees on the same day that it is "a very powerful model that can truly accelerate economic development."

According to leaked documents, GPT-6 outperforms GPT-5.4 by more than 40% in coding, inference, and AI agent tasks, and its context window has been expanded from 1 million tokens to 2 million.

Brockman also introduced the concept of "Big Model Smell," emphasizing that the model can proactively align with user intent without repeated prompting. The model also supports native unified processing of text, images, audio, and video, and has stronger long-horizon autonomous AI agent capabilities.

Regarding pricing, leaks suggest the input price will be approximately $2.50 per million tokens, and the output price will be $12 per million tokens, roughly the same as GPT-5.4. Several leakers claim the new model will be released on April 14th, but this has not yet been officially confirmed.

Less than 24 hours after GPT-6 completed its pre-training, OpenAI announced the shutdown of its video generation product Sora and simultaneously canceled its $1 billion character licensing agreement with Disney.

Altman personally called Disney CEO Bob Iger to apologize. In a recent interview, he explained the reason for the shutdown: "The core issue is computing power; it's always a computing power problem."

Related reading: GPT-6 exposed, but Altman becomes the most anxious person in Silicon Valley.

Luo Fuli discusses Anthropic's ban on "lobster": It's understandable, but OpenClaw's context management is "terrible".

Yesterday, Luo Fuli (@_LuoFuli), head of the Xiaomi MiMo team, posted on X, commenting on Anthropic's recent decision to cut off third-party tool frameworks' access to the Claude subscription service.

It is understood that this move directly impacts AI agent development tools that rely on the Claude subscription interface, such as OpenClaw (nicknamed "Lobster").

Luo Fuli pointed out that Claude Code's subscription mechanism is a "well-designed system for balancing computing power," but after the integration of third-party frameworks, the system has been subjected to far more pressure than expected.

She used OpenClaw as an example to explain in detail the serious flaws in its context management:

When processing a single user request, OpenClaw triggers multiple rounds of low-value tool calls, each carrying a long context window of over 100,000 tokens. Even if the cache is hit, there is significant waste, and in extreme cases, it can increase the cache miss rate of other requests. The actual number of requests is several times that of the native Claude Code framework, and when converted to API pricing, the real cost could be dozens of times the subscription price.

She described the gap as "not a chasm, but a deep pit."

Regarding this ban, Luo Fuli believes that the short-term pain will actually be a positive force. The pressure of third-party frameworks being forced to switch to API-based payment, resulting in a tenfold increase in costs, will force developers to improve context management, increase prompt cache hit rates, and reduce invalid token consumption. "The pain will ultimately transform into engineering discipline."

She also warned other major model manufacturers not to blindly follow the price war before clarifying the pricing model for programming subscription plans.

Selling tokens at low prices while opening up access to third-party frameworks may seem user-friendly, but it's actually a trap—Anthropic has just climbed out of this pit.

She also pointed out that if users continue to use low-quality AI agent frameworks, unstable inference services, and models downgraded to control costs, they will ultimately be unable to complete actual tasks, which creates a vicious cycle for user experience and retention.

Related reading: A major falling out! Anthropic blocks OpenClaw; the father of the "lobster": persuasion failed.

A new term has emerged in the AI world: the fear of becoming obsolete (FOBO).

According to a report published by Fortune magazine in conjunction with a recent MIT study, the automation impact of artificial intelligence on the job market is unfolding gradually and is triggering a growing "fear of becoming obsolete" (FOBO) among the workforce.

It is understood that FOBO stands for Fear of Becoming Obsolete. Unlike traditional "unemployment anxiety," this emotion is more about "becoming insignificant."

In their latest report, "Crashing Waves vs. Rising Tides," the MIT research team conducted 17,000 manual evaluations of more than 40 cutting-edge large models, including GPT-5, Gemini 2.5 Pro, and DeepSeek R1.

Test results show that AI can currently complete 50% to 75% of text-based tasks with a minimum passing standard. Research indicates that the failure rate of AI tasks is decreasing by half every 2 to 3 years; based on this trend, by 2029, AI is expected to complete most routine text-based tasks with a success rate of 80% to 95%.

The sense of crisis among workers is reflected in the statistics. KPMG data shows that 40% of employees now rank AI-induced unemployment as a core concern, a figure that has nearly doubled year on year.

Joe Depa, EY’s Global Chief Innovation Officer, confirmed this workplace polarization, stating that junior employees within companies have a very high adoption rate of AI tools, while some senior software engineers, due to their resistance to using AI, have seen their actual productivity fall 10 to 20 times behind their AI-enabled peers.

Xiaomi celebrates its 16th birthday; Wang Teng sends his wishes.

On April 6, 2010, Beijing Xiaomi Technology Co., Ltd. was officially established and moved into Yingu Building.

On August 16th of the same year, the first beta version of MIUI was released; in December, Xiaomi also released "MiTalk," a free instant messaging tool for mobile phones that works with multiple telecommunications operators. At the Xiaomi launch event on August 16th of the following year, the Xiaomi Mi 1, priced at 1999 yuan, was officially released.

Yesterday, Lei Jun, founder of Xiaomi, Lu Weibing, president of the group, Wei Siqi, general manager of Xiaomi China's marketing department, and other senior executives posted birthday wishes. Wang Teng, former general manager of Xiaomi China's marketing department, also reposted Lei Jun's Weibo post and said, "Happy 16th birthday to Xiaomi!"

In addition, Hu Xinxin, product manager of Xiaomi REDMI, revealed in a celebratory video that news about new products will be announced today.

OpenAI launches security research scholarship

OpenAI today announced the launch of the "OpenAI Safety Fellowship" program, open to external researchers, engineers, and practitioners. The program aims to support independent research on security and alignment issues in advanced AI systems and to cultivate the next generation of talent in the field of AI security.

This pilot program will run from September 14 this year to February 5 next year, with priority areas including safety assessment and ethics, robustness and scalability mitigation measures, privacy-preserving security methods, AI agent supervision and high-risk misuse scenarios.

Selected Fellows will work closely with OpenAI's internal mentors and form a learning community with fellow researchers. The program provides office space in Berkeley, shared with other Constellation fellows, and also supports remote participation.

During the project, Fellows are required to complete a substantial research outcome, which may take the form of a paper, benchmark test, or dataset. The project provides a monthly stipend, computing power support, and ongoing mentorship, as well as resources such as API credits, but does not grant access to OpenAI's internal systems.

Applications require a letter of recommendation. The application portal is currently open, with a deadline of May 3rd this year. Admission results will be announced before July 25th.

Vibe Coding played a crucial role, leading to a surge in App Store submissions.

According to a report by The Information, citing data from Sensor Tower:

Apple's App Store saw a surge of 84% year-over-year in new app releases in the first quarter of 2026, primarily driven by the popularity of AI-assisted coding tools (Vibe Coding).

Data shows that the number of new apps added to the global App Store in the first quarter of 2026 reached 235,800, a significant increase of 84% compared to the same period last year. Previously, in 2025, the number of new apps launched had already achieved a 30% increase, totaling nearly 600,000.

Sensor Tower analysts point out that this surge coincides closely with the widespread release of agentic AI programming tools (such as Anthropic's Claude Code and OpenAI's Codex), which significantly lower the barrier to entry for app creation. The new apps are concentrated mainly in productivity, photo, video, and weather categories.

In response to the surge in submissions, Apple has denied external concerns about "serious delays in app review."

Apple disclosed that its review team processed an average of over 200,000 app submissions per week over the past 12 weeks, with 90% of reviews completed within 48 hours and the average review period holding at 1.5 days.

An Apple spokesperson also confirmed that, in addition to human review, the company is gradually introducing AI tools to help handle the increasing workload of the review process.

Intel executive: Raptor Lake processors remain a core strategic component

According to Club386, Robert Hallock, Intel's Vice President and General Manager of the Enthusiast Channel Business, recently stated in an interview:

Despite Intel's recent release of the latest Arrow Lake Refresh processors, Raptor Lake remains a crucial part of its strategy and will not be phased out in the short term.

Hallock has pledged that Raptor Lake processors will continue to be available in sufficient quantities to meet current user demand.

Regarding the trends in the motherboard market, Hallock points out that some newly released motherboards are beginning to offer both DDR4 and DDR5 memory slots as a bridge for users to transition between memory types.

The report suggests that more small and medium-sized motherboard manufacturers (ODMs) may enter this segment in the future, launching dual-mode hybrid slot motherboards to fill the market gap left by larger manufacturers.

Apple has approved drivers for Nvidia and AMD external graphics cards, but they cannot be used for gaming.

AI startup Tiny Corp recently announced that Apple has officially approved its drivers for external GPUs (eGPUs) developed for AMD and Nvidia, enabling Macs with Apple chips to run AI inference tasks by connecting external graphics cards via Thunderbolt or USB4 ports.

It's worth noting that this driver comes not from AMD or Nvidia but was independently developed by Tiny Corp. It is designed specifically for AI inference and training scenarios and does not support games.

Tiny Corp stated in an article on the X platform that the driver installation process has been greatly simplified.

It's so simple that Qwen can complete the installation and then you can use it to run Qwen.

It is understood that the company completed its first eGPU test on Apple chips as early as May 2025, but at the time users still needed workarounds such as disabling System Integrity Protection for the hardware to work properly.

Nikon Z9 and classic D5 join forces on the Artemis 2 lunar mission.

Nikon professional imaging equipment once again became a key partner in deep space exploration during NASA's Artemis II manned lunar orbit mission, which just began on April 2.

It is understood that this manned lunar mission is the farthest human journey into space in more than half a century, and lays a key foundation for establishing a permanent base on the moon and even exploring Mars in the future.

The mission not only carries the tried-and-tested Nikon D5 DSLR; the astronaut crew also managed to get the flagship Nikon Z9 mirrorless camera added to the manifest at the last minute.

Commander Reid Wiseman stated, "The Nikon D5 performs exceptionally well in low-light conditions and, when paired with our telephoto lenses, is ideal for optical observations of the lunar surface, enabling us to capture a large number of outstanding images."

In this mission, the D5 will be responsible for the main photography work, while the Z9 will undergo its first real-world testing in the high-radiation environment of deep space. The inclusion of the Z9 is directly related to NASA's future lunar landing plans.

According to official sources, Nikon is collaborating with NASA to develop a next-generation Handheld Universal Lunar Camera (HULC) based on the Z9. This system is equipped with a custom thermal protection system to cope with the extreme temperatures and space radiation on the moon and will be used in Artemis 3 and subsequent manned lunar missions.

Debate the CEO? Anthropic says the company promotes an open culture.

According to Business Insider, Anthropic's head of growth confirmed that the company implements a highly decentralized communication mechanism, explicitly encouraging employees to publicly question and debate with CEO Dario Amodei on the collaboration software Slack.

According to reports, Amol Avasare, Anthropic's Head of Growth, revealed in a recent podcast that the company has created a public, personal Slack "notebook" channel for every employee (including key executives). This channel serves as an internal, public information feed, allowing all employees to access work updates across departments and directly participate in discussions.

Amol Avasare pointed out that openly challenging leadership is a standard operating procedure for the company.

According to one cited case, following a recent company-wide meeting, an employee directly challenged Dario Amodei's remarks in his dedicated channel, triggering a widespread internal debate. Company management believes this kind of adversarial communication, bypassing traditional hierarchies, directly fosters internal trust.

Former Rockstar engineer: Finding a job is so difficult now, it's like "flipping a coin".

According to GamesRadar, Rob Carr, a former audio engineer at Rockstar Games, recently spoke publicly about the job-hunting difficulties he faces during the gaming industry downturn in an in-depth interview with Reece Reilly, host of the podcast "Kiwi Talkz".

Carr has worked on the audio development of many top AAA titles such as GTA 5, Red Dead Redemption, and L.A. Noire, accumulating 20 years of experience at Rockstar.

However, he bluntly stated that this qualification is no longer a competitive advantage in today's job market.

Losing your job at some point is unpleasant, but when thousands of people are in the same situation as you, you'll find that your 20 years of professional experience are no longer enough—something that simply wouldn't have happened 5 years ago.

He further added that he was facing 35 competitors with similar qualifications and project backgrounds; interviewers made clear there was nothing wrong with his application materials, there were simply too many similarly qualified candidates, and whether he was ultimately hired came down to "flipping a coin".

New products

Samsung S27 reportedly adds "Pro" model

According to South Korean media ETNews, Samsung Electronics plans to add a "Pro" model to the Galaxy S27 series to be released next year, officially expanding its flagship smartphone product line to a matrix of four models.

Industry insiders revealed that Samsung Mobile Experiences (MX) has developed a plan to break away from the "standard, Plus, Ultra" three-model system that has been in place since the Galaxy S20 series in 2020, and introduce a brand new Pro version in the S27 series.

According to sources, the Pro version is positioned as a high-end model. Although the screen size and other detailed specifications have not been finalized, it will omit the Ultra's exclusive S Pen stylus while sharing most of the Ultra's core technical specifications.

Another source indicates that Samsung plans to bring the "Privacy Display" and 200-megapixel camera, which received positive market feedback on the Galaxy S26 Ultra this year, to other models.

In the next generation of products, the number of models equipped with this series of high-end display technologies will be expanded from a single Ultra version to two models: Pro and Ultra.

Great Wall Motors unveils new car design; Wei Jianjun solicits names online.

Yesterday, Zhang Fuzhi, Deputy General Manager of Marketing for Great Wall Motors' Haval brand, officially released the exterior outline of the new flagship model.

Judging from the pictures, the new vehicle will adopt a "boxy" rugged off-road design; the headlights will feature a dual-light design, alongside lidar and a blue assisted-driving indicator light; the rear may carry a full-size spare tire.

It is understood that the new car will be built on the latest car manufacturing platform "Guiyuan" and is expected to be positioned in the market above 300,000 yuan.

In addition, Wei Jianjun, chairman of Great Wall Motors, released a video stating that there were very different opinions internally regarding the naming of the new car, and the discussion was very intense. Therefore, he also solicited ideas from netizens on the naming of the new car, "Should it be called 'Haval HX' or 'Haval H10'?"

OPPO Find X9s Pro Hasselblad Teleconverter "Silver" Design Announced

Recently, Qiao Jiadong, Director of Smart Ecosystem Products at OPPO, released a video officially announcing some of the appearance information of the OPPO Find X9s Pro and the new Hasselblad professional teleconverter.

Judging from the released images, the OPPO Find X9s Pro will continue the design of the Find X9 series, adopting a rectangular lens deco in the upper left corner, with the telephoto lens located below the module; while the flash position has changed and adopts a circular design.

The new Hasselblad professional teleconverter will feature a silver-and-orange finish and an all-metal body. Judging from the video, it may provide about 3x magnification; paired with the phone's own telephoto, that could yield roughly 11x native optical zoom.

The new phone will be released on April 21, featuring a 200MP ultra-clear main camera and a 200MP ultra-clear telephoto lens; the Find X9 Ultra will also be released at the same time.

New consumption

WeChat's "Payment Discounts" mini-program has announced its shutdown and will officially close at the end of December.

WeChat recently announced that due to business adjustments, the services offered by the "WeChat Pay Discounts" mini-program will be integrated and upgraded into the "WeChat Pay Cash Withdrawal Saves on Every Transaction" mini-program. Users can subsequently claim free withdrawal coupons or obtain benefits through other methods within the new mini-program.

  • Starting from 0:00 on May 11 this year, the original mini-program's gold coin redemption and gifting function will be officially discontinued, but users can still redeem prizes with the gold coins they have already earned.
  • The "WeChat Pay Discounts" mini-program will officially cease operation at 00:00 on December 31st this year. Prior to this, users' previously claimed free withdrawal coupons and discount coupons will still be valid and usable within their expiration period.

Yesterday, WeChat's public relations director, "WeChat Auntie Zhou," posted on Weibo to clarify that the shutdown was not a cancellation of withdrawal benefits, but rather a further upgrade of related services.

She stated that the new mini-program integrates more withdrawal incentive modes, allowing users to receive a free withdrawal quota every day, as well as directly receive coupons. Users can also redeem withdrawal quotas by watching short dramas and playing mini-games.

The "WeChat Pay Discounts" mini-program was launched in 2020. It mainly accumulates gold coins through users' spending behavior and allows users to use gold coins to redeem related benefits such as free cash withdrawal quotas.

Pang Donglai responds again to the egg canthaxanthin controversy: All tests passed.

Recently, Pang Donglai issued its second statement on the canthaxanthin-in-eggs controversy raised by "Wang Hai Evaluation". The statement said that roughly 150 test reports show all fresh eggs of the brands it sells meet the required standards, and the company announced it will pursue legal action to protect its rights.

Pang Donglai sent fresh eggs from five brands to three authoritative institutions for testing. The reports were all completed on April 3, and the indicators such as heavy metals, veterinary drug residues, additives, and microorganisms all met national standards. At the same time, on-site inspections were carried out at three production plants, and the feed sampling results were also all qualified.

Regarding canthaxanthin, Pang Donglai cited official data stating that the substance is a legal feed additive with a limit of 8 mg/kg in poultry feed, but that China currently sets no limit on canthaxanthin content in fresh eggs; "Wang Hai Evaluation" has confused feed additive standards with food standards. Pang Donglai stated that it has completed evidence collection and will pursue legal action.

The incident began on March 13th of this year when "Wang Hai Evaluation" released a video claiming that 9.54 mg/kg of canthaxanthin was detected in Pang Donglai eggs, exceeding the feed limit. Pang Donglai issued its first response on March 15th, stating that it would take further action after the official investigation results were released.

New overseas phishing scams exposed

According to a recent report by BleepingComputer, a new type of SMS phishing attack has broken out in multiple locations across the United States, with scammers starting to steal users' privacy and financial information by forging court violation notices with embedded QR codes.

The cyberattacks targeted multiple states, including New York, California, and Texas.

Attackers impersonate local courts or the Department of Motor Vehicles (DMV) to send users a fake "default notice" image, claiming that the recipient has outstanding traffic violations that have entered the enforcement stage, and demanding that they immediately scan the QR code in the image to pay the fine.

After scanning the QR code, users first land on an interstitial page with a CAPTCHA, and are then redirected to a convincing imitation of an official payment website (such as ny.gov-skd[.]org).

The phishing website uniformly demands that users pay a $6.99 "debt," stealing core data such as names, home addresses, phone numbers, and credit card details in the process.


Hongguo Short Drama: Continuing to Address Illegal Use of AI-Powered Short Drama Content

Yesterday, the official account of Hongguo Short Drama released an announcement titled "Announcement on Continuing to Address the Illegal Use of AI Short Drama Materials".

The platform stated that content compliance is a consistent requirement of Hongguo Short Drama for its producers. In order to regulate the creation order of AI short dramas and protect originality and legitimate rights and interests, the platform resolutely cracks down on the illegal use and unauthorized theft of AI short drama materials, and continues to carry out comprehensive content governance work.

It is reported that in the first quarter of this year, Hongguo Short Drama removed a total of 1,718 comic dramas and short dramas that violated the platform's governance rules, including:

In response to the recent, frequent unauthorized use of AI-generated short drama material, the platform launched a special campaign: a review of 15,000 works has been completed, and 670 works have been handled according to the rules.

Hongguo also released some typical cases: unauthorized use of cartoon characters, unauthorized use of AI brand images, unauthorized use of original game character images, and unauthorized use of actor images.

Hongguo emphasized that for content and producers that commit serious violations and repeatedly violate regulations, the platform will take measures such as removing content from app stores, banning accounts, terminating cooperation, and even pursuing legal action in accordance with laws and regulations.

Poster released for Olivia Wilde's new film "Invitation"

According to "Hollywood Watch," a poster has been released for Olivia Wilde's new film "Invitation," which stars Seth Rogen, Penelope Cruz, and Edward Norton.

The film is a remake of the 2020 Spanish film "Sentimental". It tells the story of a couple who have been married for many years and are experiencing marital problems. One day, they happen to invite their upstairs neighbors, who have open views on sexuality, to their home. This night is full of unexpected twists and turns, gradually revealing their repressed emotions and unexplored sexual orientation.

The film will have a limited theatrical release in North America on June 26.

Box office revenue during the 2026 Qingming Festival holiday exceeded 300 million yuan.

According to Lighthouse Pro, as of 7:29 PM on April 6, the total box office revenue for the 2026 Qingming Festival holiday period (April 4-6) has exceeded 300 million yuan.

Among them, the films "Super Mario Galaxy Movie", "I, Permit", "The Rescue Plan", "Genius Game" and "My God" ranked in the top five at the box office during the period.

#Welcome to follow iFanr's official WeChat account: iFanr (WeChat ID: ifanr), where more exciting content will be presented to you as soon as possible.

The first selfie from humanity's return to the Moon was shot on an iPhone.

Koch made no preparations.

She raised her phone, pointed it at the porthole, and pressed the shutter. Front-facing camera, 18 megapixels, default settings. Her face was in the foreground, with the entire blue Earth suspended in absolute darkness behind her.

The photo was immediately released to the public by NASA and quickly spread around the world. Apple probably never expected to receive a historic advertisement without spending a penny.

▲ Shot with iPhone 17 Pro Max (Christina Koch)

Some netizens have even compared this photo to Apollo 8's "Earthrise": taken in 1968, it was the first color photograph of Earth made by a human from the vicinity of the Moon, and it is hailed as one of the most influential photos in history.

The two photos were taken 58 years apart.

On the same day, Commander Reid Wiseman took another photo. He didn't write any explanation, only a short caption: "Indescribable."

These two photos have something in common that many people haven't noticed: they are both selfies. They were taken using the front-facing camera of an iPhone 17 Pro Max, 18 megapixels, with default settings, and then lightly color-corrected in Adobe Lightroom.

53 years later, humans flew to the moon again, this time carrying an iPhone.

How many steps does it take to take a photo in space with an iPhone 17 Pro Max?

The first step, of course, was to send the iPhone into space. On April 1st, US time, Artemis II launched from the Kennedy Space Center. This marked humanity's return to the moon 53 years after Apollo 17 in 1972.

The crew consisted of four people: Commander Reid Wiseman, pilot Victor Glover, mission specialist Christina Koch, and Canadian astronaut Jeremy Hansen.

The four also broke several historical records: Koch was the first woman to go into deep space, Glover was the first Black astronaut to carry out a lunar mission, and Hansen was the first non-American astronaut to fly to the moon.

However, there is one detail that is often overlooked: every astronaut carries an iPhone 17 Pro Max in the calf pocket of their flight suit.

NASA completed the distribution when the crew entered quarantine in March. About four hours after launch, a silver iPhone appeared in the cockpit camera's view, floating from Hansen's hand over Wiseman and Glover's heads and landing in Koch's hand.

This isn't the first time an Apple phone has been sent into space.

In 2011, the final Space Shuttle mission, STS-135, carried two iPhone 4s for an experiment; in 2021, Jared Isaacman, now the NASA Administrator, used an iPhone to take pictures in Earth orbit while commanding the SpaceX Inspiration4 mission.

Artemis II is different.

This marks the first time NASA has officially equipped every lunar astronaut with an iPhone and certified the device as fully flight-worthy for deep space. Apple confirmed that this is the first time an iPhone has passed full certification for on-orbit and extended deep-space use, and that Apple itself did not participate in NASA's certification process.

Obtaining this ticket is far more complicated than imagined.

For an iPhone to go into space, it needs to pass four hurdles.

In the deep space environment, one detail speaks volumes: on the first day of the mission, Wiseman reported to Houston that both instances of Outlook on his Microsoft Surface Pro tablet had stopped working, and he asked the ground to troubleshoot the issue remotely.

NASA Flight Director Jude Freeling explained that such malfunctions are not uncommon on space station missions; they are typically caused by the software's authentication logic entering an infinite loop when there is no direct network connection. Once the news broke, overseas netizens could hardly help but empathize: even flying through deep space at 4,275 miles per hour, 30,000 miles above Earth, you still can't escape the nightmare of Outlook crashing.

After the laughter subsided, this incident actually revealed a more serious reality.

Weightlessness, radiation, confinement, and offline conditions combine to create a near-zero margin for error in the deep space environment. Any minor software or hardware malfunction can escalate into a major problem.

Therefore, every piece of hardware entering the spacecraft must undergo NASA's safety certification. Tobias Niederweiser, a researcher at BioServe Space Technologies at the University of Colorado Boulder, walked through this process.

The first phase involves reviewing the entire device. Since the iPhone 17 Pro Max is considered a "flight component," the focus is on verifying the titanium frame, the type of screen adhesive, and the chemistry of the battery pack. It is necessary to confirm that the structure remains intact under extreme acceleration and vibration, and that the materials meet spacecraft flame-retardant and corrosion-resistant standards.

The second phase involves identifying potential hazards in microgravity. In a confined space, shards of broken glass or sapphire lenses could float freely in the air, potentially entering the astronauts' eyes, skin, or even lungs. Other risks include thermal runaway of batteries under the high-energy radiation of deep space, and the release of volatile organic compounds from the fuselage materials in a confined environment, which could interfere with life support systems.

The third phase involves developing a mitigation plan.

Regarding communication interference, NASA permanently blocks cellular, Wi-Fi, and Bluetooth modules through mobile device management policies. For physical protection, regulations stipulate that mobile phones must be stored in flight suit pockets during launch and reentry phases, and for daily use, they must be secured to the cabin wall with Velcro to prevent them from drifting and colliding with precision instruments in weightlessness.

Niederweiser described this level of precision: "In space, every pen and every pen cap has to be secured with Velcro because everything floats."

▲ iPhone 17 Pro Max being put into a spacesuit

The fourth phase is verifying that these measures work.

The "Ceramic Shield 2" panel on the iPhone 17 Pro Max passed simulated impact tests. Apple claims it is the toughest glass currently available on any smartphone; even under a strong impact, cracks stay confined to the microscopic level rather than producing clouds of floating fragments, which satisfies NASA's V2 9005 guidelines for minimizing mechanical hazards. And because the phone was completely offline throughout the mission, its radio-frequency chips never had to be driven, keeping overall heat generation within safe limits.

The iPhone plays a supporting role, but the stories of professional equipment are equally interesting.

The certified iPhone 17 Pro Max is essentially just a highly streamlined imaging device.

With cellular, Wi-Fi, and Bluetooth permanently disabled, astronauts in lunar orbit cannot use social media, send or receive instant messages, or use wireless headphones; FaceTime is out of the question. The only core functions retained by the phone are taking photos and videos.

The logic behind this restriction is simple: the Orion spacecraft's main navigation and communication systems cannot tolerate any external electromagnetic interference; even a consumer phone actively scanning for Wi-Fi networks is a potential hazard.

The astronauts put the camera to good use.

The two images looking back at Earth were taken on the second day of the mission. In his mission feedback, Wiseman emphasized that the iPhone let them show the world life in deep space from a more "everyday" perspective. The iPhone 17 Pro Max was clearly not the only imaging device on this mission; it played a supporting role in the overall shooting system.

Two Nikon D5 DSLRs and one Nikon Z9 mirrorless camera handled the archiving of key scientific images.

▲Shot with Nikon D5

Nikon and NASA have a long-standing partnership, and the Nikon Z9 is currently the main camera on the International Space Station. This flagship mirrorless camera features a 45.71-megapixel stacked CMOS sensor, supports 8K/30p and 4K/120p video recording, and its 4-axis tilting screen allows astronauts to take pictures from various angles even in confined spaces.

In addition, the Nikon cameras on Artemis II were fitted with professional-grade Zeiss or Nikon lenses, used primarily to capture high-resolution images of lunar geological features and the spacecraft's exterior structure. On the fourth day of the mission, the astronauts used the Nikon to photograph the complete outline of the Moon's Orientale Basin, the first time in human history that the entire region had been observed with the naked eye.

The scientific value of these images is something the iPhone clearly cannot replace: high-precision color reproduction, predictable physical imaging behavior, and raw data untouched by computational photography are all indispensable foundations for geological analysis.

Four GoPro cameras were also mounted on the spacecraft's exterior to film it from outside. Surprisingly, some of the footage was shot on a GoPro Hero 4 Black released back in 2014: a revolutionary product in its day, practically an archaeological find now.

▲ NASA took this photo using a GoPro Hero 4 Black.

One mobile phone, two eras

When discussing the long history of cameras in space, Hasselblad is a name that cannot be ignored.

Since astronaut Wally Schirra carried a Hasselblad 500C on the Mercury-Atlas 8 mission in 1962, Hasselblad cameras have been synonymous with space photography. During the Apollo 11 moon landing, Armstrong carried a silver Hasselblad Data Camera fitted with a Zeiss Biogon 60mm lens, while a separate black, motorized version with a Zeiss Planar 80mm lens was carried inside the lunar module.

To reduce the weight of the return capsule, a total of 12 Hasselblad cameras were left on the lunar surface. Space cameras of that era underwent extensive modifications: unnecessary parts such as leather trim and reflectors were removed, holes were drilled to reduce weight, and custom sights were added; they were thoroughly engineering products.

Compared to the deep customization of the Hasselblad era, the logic for selecting devices today is completely different. The entry of the iPhone 17 Pro Max is a direct response to this gap.

For the past half-century, space photography has followed a closed professional logic: highly customized, rigorously certified, and kept absolutely separate from civilian technology. Now, this barrier is loosening.

▲ Memes created by netizens

With the advent of the iPhone, NASA Administrator Isaacman succinctly summarized the new priorities in one sentence: "We challenged long-established processes to complete airworthiness certification for modern hardware in a faster cycle." The implication of this statement is that the certification cycle itself can also be redesigned.

Koch's selfie looking back at Earth ultimately became the most recognizable image of the mission. It was uncomposed, unlit, and all parameters were set to default.

▲Meme from netizens, image from: @fzlkn

She saw something, felt it was worth keeping, and then took a picture.

This reminds me of the photo of Earth taken by Voyager 1 from 6.4 billion kilometers away in 1990—a dim blue dot, like a speck of dust.

Astronomer Carl Sagan once wrote the following:

"Look again at that dot. That's here. That's home. That's us.

On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives.

The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every 'superstar,' every 'supreme leader,' every saint and sinner in the history of our species lived there:

on a mote of dust suspended in a sunbeam."

Thirty-six years later, Koch used an iPhone to photograph the blue planet from lunar orbit.

This time, the speck of dust was put into a pocket, taken to lunar orbit, and photographed by a mobile phone that anyone could buy.

A photo of our shared home, shot on iPhone.


Interview with Xu Chi of XREAL: I have a mobile phone and a computer, why would I want to replace them with AI glasses? | Diversity Company

Editor's Note

When we want a cola, we usually have only two choices: Pepsi or Coca-Cola. When choosing a phone, there's a 90% chance we're picking between Apple and brands like Huawei, Xiaomi, OPPO, and vivo. When buying sportswear, the first brands that come to mind are most likely Nike and Adidas.

But the world is so colorful because, beyond these giants, there are companies that defy tradition, strive to create something different, focus on design and functionality, and look to tomorrow.

They have unconventional business models, and their designs and products offer unique user value and ample social talking points. Crucially, they are not burdened by the constraints of large corporations and dare to advance recklessly. They are "diversity companies."

Diversity is key to an open world. ifanr believes that only companies that truly value and understand diversity can foresee the future sooner than most. In this column of the same name, ifanr will share exclusive interviews with these diverse companies, witnessing how they are reshaping the future and defining the new normal.

This is the 10th article in the "Diversity Companies" column.

In 2017, Xu Chi resigned from Magic Leap and returned to China to found XREAL (formerly Nreal). At the time, the entire XR industry was celebrating Magic Leap's whale-breach demo; everyone thought this was the future, yet no one had actually sold a single pair of consumer-grade AR glasses.

Nine years on, the field has lived through the bubble and ebb of the VR metaverse, the high-profile launch and lukewarm reception of Apple Vision Pro, the subsidized expansion of Meta's Ray-Ban glasses, and the AI wave's re-examination of every terminal form factor.

XREAL survived, becoming one of the first strategic hardware partners for Google's Android XR platform. According to IDC, XREAL has held the largest share of the global AR glasses market for four consecutive years. And just recently, XREAL officially filed its listing application with the Hong Kong Stock Exchange.

This smart glasses company, after nearly a decade of lying low, is about to enter a new phase of its business.

This exclusive interview with iFanr was conducted before XREAL filed for its IPO on the Hong Kong Stock Exchange. In the conversation, Xu Chi did not shy away from any pointed question: from "why Apple's Vision Pro was destined to have problems," to "Chinese manufacturers are using supply chain integration to win the first half," to "no company in the eyewear industry has ever truly made money." But one clear judgment ran throughout:

Glasses are the best medium for AI because only they can provide the model with the highest quality context.

Xu Chi, Founder and CEO of XREAL

Surviving in an industry where no company makes money

Q: When you left Magic Leap to start your own business, you were working on a very cutting-edge product. Why did you decide to start such a company?

A: When I was at Magic Leap, the first few months were amazing. Suddenly, you're standing at the beginning of a new era and have the opportunity to be at the forefront of it. If you're lucky, you can even participate in defining it. That feeling is fantastic.

At the time, my judgment was that this was the next big opportunity, and it would definitely materialize by 2020. I came back in 2016 because I felt that if I didn't return soon, it would be too late. My thought at the time was that someone knowledgeable in this industry would definitely come back from abroad—like Robin Li and Charles Zhang back then. That person could be me, or it could be one of my colleagues, because there were only so many people who understood this area at the time. So why not you? You can't come back fully prepared. If things don't go well, I'll just go back. That was my simple thought at the time.

It's been a tough journey, with the industry experiencing ups and downs. But I've always stuck to one thing: we've never strayed from our original intentions. This is also a test of what each entrepreneur's inner drive truly is—is it for fame, for success, or for wealth?

We genuinely believe that glasses are the next big thing, and something that big shouldn't come easy. We happened to enter this industry very early, almost with a sense of mission, eager to see what the final answer will be, and even wanting to stay with the industry until that answer is revealed.

Q: XREAL just celebrated its ninth anniversary earlier this year. In nine years, have you met your expectations?

A: First of all, it definitely didn't meet expectations; the whole industry doesn't meet expectations, but I'm still quite satisfied.

Given our understanding and enthusiasm at the time, we were quite lucky to get to where we are today. Along the way, we met many good people, kind-hearted individuals, upstream and downstream partners, and our own team, which is why we are where we are today.

Of course, if we could do it all again with today's understanding, we would do even better. That's the process of growth. I often tell my colleagues that if there were a museum recording every step of XR's history, XREAL would already have left its own significant mark.

Q: Industry trends are constantly changing. Have you ever experienced your darkest moments? How did you overcome them?

A: Definitely.

Before an industry truly takes off, every darkest moment is often accompanied by some shining moments. The most memorable was probably when the pandemic first emerged. At that time, our overseas business was at its best because people needed such products while staying at home, and all overseas operators wanted to cooperate with us. Our CES debut was a great success.

But then the pandemic hit, people couldn't leave the country, financing was disrupted, the team was unstable, and internal and external conflicts erupted all at once. Internally, there were strategy and management disputes, and externally, some companies that were doing well suddenly dropped their partnerships.

Looking back now, I feel much more at ease because these were all processes that should have been taken for granted.

Q: In my opinion, Vision Pro replicated Magic Leap's technology, and even surpassed it. But Vision Pro didn't meet expectations. Was that a blow to you at the time?

A: We were actually quite disappointed at the time. I remember very clearly going to meet Wang Xing of Meituan once; he was also watching this field. After we finished talking, he asked, "What is Apple doing?" I said, "The product Apple is making probably isn't going to work."

At the time, many people in China believed that "everything Apple does has a reason," and you couldn't convince them otherwise. It's hard to use a product that hasn't even been released as evidence. And later, saying Apple did something wrong would only attract criticism.

We can only go with the flow. But actually, I've felt there was something wrong with this Apple product for quite some time now.

Q: What is the reason?

A: I think this is the first Apple product ever that wasn't pared down.

When Steve Jobs ran Apple, it was all about ruthless subtraction: "I don't ask what you want; what I give you is what you want." The Vision Pro is clearly the opposite: "I don't know what you want, so I give you everything." It adds this and adds that; it's a product that piles on features.

It's said that this is indeed Apple's internal product logic. They are repeating the path taken by the Apple Watch—the first-generation Watch wasn't successful, but it gave them the opportunity for subsequent success, showing them that focusing on health monitoring and exercise was the right direction.

The initial idea behind AVP was also to avoid making judgments and try to add as many features as possible to see what users would prefer. However, their mistake was that adding too many features to the headset made it too heavy and uncomfortable to wear.

As a result, the first-generation product did not provide Apple with any feedback on "which direction the next generation should take," because the sample size was too small. Therefore, they will likely be more conservative in their next step.

Q: Your main shipping products are actually large-screen mobile devices. When did you decide to set aside spatial computing and focus first on the large-screen mobile device? Why do you believe this large-screen positioning is the right one?

A: This wasn't a judgment call; it was something the market proved to me. Where we are today is truly the result of the path we walked. When you're exploring uncharted territory, genuine user feedback is extremely important.

Our first-generation product was designed to be smaller, cheaper, and better. The idea at the time was to partner with telecom operators, who had local influence, brand endorsement, channels, and ecosystems, while we provided the technology, handling both the hardware and software.

We once created what we considered the most complete commercialization loop in South Korea: pre-installed apps on phones, glasses bundled with phones, 5G contracts to drive down prices, sales through carriers and Samsung/LG channels, and LG finding local content to build an ecosystem. This is the most complete ecosystem we've seen so far, but it wasn't successful—because neither we nor the carriers had real platform appeal.

Only then do you start to reflect: who is truly capable of building a platform?

I made a bold prediction: only Apple and Google. Not even Meta, not even OpenAI.

Because of their momentum and accumulation in the mobile phone ecosystem over the past 20 years, they were the only ones capable of building a platform. At that time, my thinking was very simple: don't build the platform ourselves.

Because if one day you develop a system, and Google releases a completely different system, you've essentially misled all your developers. What if the interaction logic is completely different?

So we had to go back and simplify. We come from a technical background, and cutting back in technical areas is the most painful. You have to tell the people doing SLAM, "Sorry, we used to do six degrees of freedom; now we do three." Someone might say, "Anyone can do three degrees of freedom, right?" But there was no other way.
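The six-to-three degrees-of-freedom cut can be pictured concretely. Here is a minimal sketch (the field names and units are illustrative, not XREAL's actual data structures): a 6DoF tracker reports both where the glasses are and which way they point, while a 3DoF tracker keeps only the orientation.

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    # Orientation in degrees plus position in meters: full spatial tracking.
    yaw: float
    pitch: float
    roll: float
    x: float
    y: float
    z: float

@dataclass
class Pose3DoF:
    # Orientation only: virtual content follows head direction,
    # but can no longer be pinned to a fixed point in the room.
    yaw: float
    pitch: float
    roll: float

def reduce_to_3dof(pose: Pose6DoF) -> Pose3DoF:
    """Drop the translation component, keeping only rotation."""
    return Pose3DoF(pose.yaw, pose.pitch, pose.roll)
```

Dropping x, y, and z is exactly why the cut stings: with 3DoF a virtual screen can stay in front of your eyes, but it cannot stay anchored to your desk as you walk around the room.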

However, our original intention never changed: although we focused on display technology, the other line of thinking never broke off. Until Google found us.

Q: How did you manage to get this collaboration with Google?

A: We've always kept open lines of communication with Google. They had long been watching us internally; even some Apple executives buy our new products as soon as they're released. Attention from your peers is probably the greatest recognition you can receive.

When Apple released AVP, Google immediately decided to follow suit. But they soon discovered that AVP was unsuccessful, and there were two major takeaways from its failure: it was too expensive and too heavy. Because it was expensive, developers weren't interested, believing there wouldn't be enough installed base within three to five years. Because it was heavy, consumers had no intention of wearing it for long stretches or continuously.

The real solution lies in making it affordable and lightweight. XREAL has focused on lightweight design and modular construction from day one. Leveraging our long-term accumulation of core spatial computing technologies and China's excellent supply chain, we are also more competitive on price. So the collaboration became a natural progression.

▲ Project Aura, XREAL's Android XR glasses developed in collaboration with Google

Glasses are the best medium for AI.

Q: Whether it's spatial computing devices or AI hardware, what should the ultimate form of smart glasses be? Some people in the industry have mentioned a division from L1 to L5, do you agree with that? Because in the field of glasses, the current L1 experience is far better than L5, which is quite strange.

A: I previously gave a definition from L1 to L5, mainly a classification of intelligence levels—in the early stages, they could be used occasionally, but later they became more and more like your own personal assistant. But why are lightweight glasses destined not to replace everything? Because of the physical limitations of display and computing power.

If you want to add a display, the most common approach today is the optical waveguide. However, even at its best, a waveguide's display quality is only comparable to an in-car head-up display (HUD). That's fine for translation and navigation, but you wouldn't watch movies or play games on a HUD. Furthermore, we've been spoiled by Retina-class displays, and matching that density means rendering far more pixels, which demands far more GPU. On a lightweight device meant to be worn all day, the battery simply wouldn't last.
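The pixel-count pressure behind this trade-off can be sketched with back-of-the-envelope arithmetic. The figures here are illustrative assumptions, not numbers from the interview: roughly 60 pixels per degree is a common rule of thumb for "retinal" density, and the two fields of view stand in for a HUD-class waveguide versus a headset-class display.

```python
def pixels_per_eye(fov_degrees: float, ppd: float = 60.0) -> int:
    """Pixels needed to cover a square field of view at a given
    pixels-per-degree density (simplified: square FOV, uniform density)."""
    side = fov_degrees * ppd
    return int(side * side)

# HUD-class waveguide vs. headset-class display (illustrative FOVs).
hud_like = pixels_per_eye(25)     # 2,250,000 pixels
immersive = pixels_per_eye(100)   # 36,000,000 pixels

# Pixel count grows with the square of the FOV: a 4x wider view needs
# ~16x the pixels, and roughly that much more GPU work and power.
print(hud_like, immersive, immersive / hud_like)
```

The quadratic growth is the point: widening the field of view toward "Retina everywhere" multiplies render load far faster than it multiplies comfort, which is why an all-day lightweight device and a rich immersive display pull in opposite directions.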

Therefore, we must make trade-offs: there is a lighter device that can be worn all day, but with a weaker display; and there is a relatively heavier device, but with a portable form factor and display capabilities on par with today's Retina displays. These two are inherently separate.

Q: So you think that in the future, a pair of glasses won't solve all problems?

A: When people think of glasses, they might think of different forms. The Meta Ray-Ban is one form, what we're working on now is another, and the large helmet is yet another. It's not a matter of choosing one of three. Just like today you have a mobile phone, tablet, laptop, and desktop computer, they meet different scenarios and have different priorities.

AI glasses are meant to be worn all day, so they must be lightweight. The second form is our current mobile form, which is portable rather than always worn. The advantage is that it can be slightly heavier, but it can be worn during work and displays richer content. On the other side is a large helmet, including AVP, which offers an absolutely fantastic experience, but it might be more like a dedicated home device.

We believe these three forms will coexist for the next 10 years or even longer, and no single device will replace everything. Just like in science fiction movies where watches were envisioned replacing phones, unfortunately, we still wear both watches and phones today. Some things have physical boundaries.

Q: I have a mobile phone and a computer, why do I need to use glasses to replace them?

A: I used to think that today's computers and mobile phones have compressed the entire world of internet information into a two-dimensional rectangular grid. True three-dimensional perception, three-dimensional display, and the fusion of the virtual and real worlds are inevitable. But recently I've had a new thought—maybe that alone isn't strong enough, not enough to make users feel, "I have to do this."

This is the new answer we've come up with after more than a year of reflection: we should thank AI, as it may bring us a completely new way of interacting. In the past, whether it was a computer or a mobile phone, it was essentially a human controlling a machine. Keyboards are efficient but have a high learning curve, while touchscreens are relatively efficient and have a low learning curve, but they still haven't escaped the paradigm of "human controlling machine." Apple uses eye tracking for 3D interaction on AVP, which is extremely inefficient; it's essentially interacting on a 3D canvas.

When AI emerged, it was a revelation. The true next generation of interaction will no longer be about humans controlling machines, but about humans communicating efficiently with an intelligent agent, just like we do now. In the future, your phone, computer, and glasses will all have an intelligent agent, communicating through the five senses in a way that connects people.

Q: Many AI hardware devices nowadays, such as headphones and pendants with cameras, also serve as AI input. How do you view the competition with these devices? They are cheaper and have even wider application scenarios.

A: Let's go back to first principles. Why are glasses the best platform for AI? Because when you add eye tracking in the future, glasses may be the only device that can know your focus point.

For example, whether it's headphones or other devices, if they want to take a picture and analyze it—for example, if there are three people sitting in front of you, who are you looking at?—uploading the entire picture involves a huge amount of computation. But with eye tracking, I can detect that you're looking at a specific person; I can even crop out their silhouette and only upload that one image to the cloud. Humans are naturally like this too; when I'm focused on talking to you, I might only notice your facial expressions and not pay attention to the trees behind you. Only glasses can do these things.
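The gaze-driven cropping described here can be sketched in a few lines. This is a toy illustration, not XREAL's pipeline; the crop size, the gaze coordinates, and the plain-list "frame" are all assumptions made for the example:

```python
# Hypothetical sketch: use a gaze point to crop a frame before sending
# it to a cloud model, instead of uploading the full image.

def crop_around_gaze(frame, gaze_x, gaze_y, half=112):
    """Return the region of `frame` centered on the gaze point.

    `frame` is a 2D list of pixel rows; the crop window is clamped to
    the image borders so it never goes out of range.
    """
    height, width = len(frame), len(frame[0])
    left = max(0, min(gaze_x - half, width - 2 * half))
    top = max(0, min(gaze_y - half, height - 2 * half))
    return [row[left:left + 2 * half] for row in frame[top:top + 2 * half]]

# A 1080p frame cropped to a 224x224 patch uploads roughly 40x fewer pixels.
frame = [[0] * 1920 for _ in range(1080)]
patch = crop_around_gaze(frame, gaze_x=960, gaze_y=540)
print(len(patch), len(patch[0]))  # 224 224
```

The computational saving is exactly the ratio of the two areas, which is the article's point: attention makes most of the scene unnecessary to transmit.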

Essentially, this is very similar to the principle of LLM—the attention mechanism. Glasses are the easiest terminal to obtain the highest quality context.

Q: I tried Project Aura yesterday, and I felt that with a truly usable display, many productivity scenarios become possible with the help of an AI Agent. For example, I could do without a computer: as long as I can give commands, clearly see the output, and judge whether the Agent's delivery meets expectations, that's enough.

A: You've made an excellent point. Now imagine you're the CEO of a company, and the AI Agents are your employees. How do you make these employees understand your instructions more and more accurately?

It's not about you paraphrasing in words, since words can lose background information; it's about the assistant having been present in many of the scenarios in your work. When you repeat an idea to it, it might say, "Oh, you came up with that idea in that scenario, you mentioned it while chatting with someone." Because it has more background information, it is more likely to complete the task accurately.

Therefore, I need to upgrade the AI Agent's input, turning it into contextual input rather than just abstract text.

Project Aura

Q: If you were to develop AI glasses in the future, what would you like them to look like?

A: I hope it can truly provide me with insights from a third-party perspective, insights I might not have noticed myself. I'm still looking at it from the perspective of a personal assistant. I hope it can help me review my work at the end of the day, offering angles and things I hadn't considered from a first-person perspective. Therefore, it needs to be available 24/7 and multimodal.

Q: Doesn't this contradict your current direction in developing displays? Your technical expertise is more focused on displays, but the scenario you just mentioned seems to be possible without a display.

A: What XREAL does well today is that when we solve a problem, we go back to first principles and then solve it the hard way, just as with our choice not to make chips simply because we make displays. Why could Tesla build cars when its founder came from payments? Why could a carmaker then build rockets? It's not because "this is the closest adjacent thing, so I'll do it." What's impressive is that they consistently follow first principles: solving a problem by the seemingly complex but actually most direct method.

Q: In your opinion, what are the first principles of XREAL?

A: An always-available, multimodal AI device, with at least eight hours of battery life plus long-term memory, is a highly monetizable AI personal assistant.

Our core goal is to create an AI personal assistant. The question is whether to prioritize around-the-clock operation, display output, or multimodal capability; each is a necessary step toward the ultimate personal assistant. This idea truly took shape after multimodal AI matured, which expanded what we believe are the boundaries of this field. My initial vision was simply a smaller, lighter, cheaper device.

Long-termism in the Chaotic Era

Q: What do you think is the core value of smart glasses?

A: The core value of glasses lies in the fact that they represent the best form of sharing high-quality context and attention with the model. Today's context is similar to a CPU cache, a kind of short-term memory. Long-term memory, on the other hand, is a completely new memory system. This will emerge within the next two to three years, and it's something that everyone in the agent field has been researching.

Q: Is this an industry consensus, or are many people who make eyeglasses just interested in making eyeglasses?

A: When the iPhone came out in 2007, it wasn't a consensus either. We've entered a similar chaotic era today. Just like back then, no one could know the answer in advance; we only call Musk brilliant and Jobs amazing in hindsight. That period, too, was to some extent a chaotic era in which everyone was searching for the answer.

But what I want to say is that when an industry turns on disruptive innovation, you're unlikely to see the martial-arts-novel scenario where a mysterious master suddenly appears and wipes everyone out. This industry places great emphasis on research and development. The definitive product of this chaotic era, the iPhone Moment, is very unlikely to come from the middle of the supply chain.

Q: Many domestic manufacturers have already entered the 1000-yuan price range, and there are more and more noisy products on the market. How do you maintain your brand image? What is the fundamental difference between you and companies that integrate the supply chain?

A: If we keep emphasizing originality, but actually can't outsell companies with integrated supply chains, then it probably means that our original elements lack differentiation. I believe our products are differentiated, but difficult things take time.

Since XREAL's successful display glasses business in 2022, I've been constantly thinking about where our brand should be positioned. We aspire to produce mid-to-high-end products, and brand recognition needs time to develop. Time is the biggest enemy for startups, and we must be patient.

JK from Insta360 once said: "A brand is the trust that consumers place in you when they don't have enough information."

We especially value this trust. You might need several generations of products to build it, but just one bad product can destroy it. So in this process, we are no longer just pursuing high-speed growth, but high-quality growth.

For years, what we've been doing is ensuring we lead the industry in changing user experiences: chips, wide-viewing-angle optical engines, and real-time 2D to 3D conversion. I believe these will gradually solidify in consumers' minds. Naturally, some will try to take shortcuts through marketing, attempting to create an impression of "I'm similar to you," but I believe time will prove everything.

Q: The AI industry is changing almost daily this year. As a hardware entrepreneur, do you feel anxious?

A: The logic is the same as stock trading. If you're always in the market, watching the fluctuations every day, it's easy for short-term volatility to affect your judgment and mood. If you take a long-term view and broaden your perspective, you might have a clearer picture.

The core challenge is a test of your long-term strategic resolve. Before DeepSeek's meteoric rise, the names people in China knew were Kimi and Doubao. DeepSeek chose not to advertise alongside those companies; instead, it quietly focused on its own development until, one day, overseas observers discovered it had shaken even Nvidia's stock price. We feel that a similar approach suits us better.

Our earlier foundation gives us some room to wait. Many companies today are forced to release glasses, or to make glasses that exist only in presentations, because they need to survive until the next stage, just like the early days of electric vehicles, when everyone was building cars in PowerPoint. I think it's good that we can take a step back and think longer-term.

Q: Google did a lot of promotion at CES, but didn't launch any products. Are you worried that your platform's development is too slow? Will your product compete with Google's?

A: Actually, Google's CES event is a small-scale, closed-door invitation event. They invited many people to attend, including us, who spent half a day in their meeting room meeting different partners. I'm not afraid of them being slow; I'm afraid of them being fast. Because a platform needs a rhythm; it's not enough for just the platform to be launched. There also needs to be key content and an ecosystem. We are very satisfied with the current situation.

Furthermore, I feel that there's a bit of a rush in AI development in China today. Everyone seems to be racing, feeling that if they're six months late, they'll miss out. But I don't think defining this next-generation interaction paradigm in AI is about getting a head start; it's a long-distance race, and getting on the right track is far more important than rushing ahead.

Google will do what it did with Android. I believe that at some point they will have their own Pixel, but they will definitely focus on building the platform first. This is also our very clear read, so we are not worried about competition in the short term. They may be our best partner: they do what we can't, and what we do happens to be what they need most.

Q: Glasses will most likely go through a process similar to that of mobile phones and new energy vehicles, going from the first half to the second half. Where do you think we are now?

A: The eyewear market will likely follow a similar path to smartphones and new energy vehicles: leading manufacturers will continuously invest in R&D, achieve breakthroughs, rapidly iterate on their products, and establish industry rules. Then, downstream players in the supply chain will reduce costs and empower more manufacturers. Most Chinese manufacturers are familiar with the latter half – making small iterations, incremental innovations, and large-scale manufacturing on products already defined by others. But the eyewear industry hasn't even reached that second half yet.

What I least want to see is this industry using supply chain integration and marketing to fight a battle in the first half.

Because the first half of the game still requires technological innovation and iteration. Personally, I don't think any of today's products have reached the wow factor of the original iPhone 1. And that iPhone Moment is unlikely to have come from a fourth-rate company that only integrates the supply chain.

While eyewear is currently booming, no Chinese eyewear manufacturer has yet achieved sales of over one million units for a single product. Globally, only Meta stands out, but Meta relies on subsidies. The true turning point for this industry will be assessed without subsidies.

Hand-drawn posters from XREAL users

Q: Is your ultimate business model still selling hardware?

A: Of course not. Even today, the model makers haven't figured out their business model. What you really want to ask is: when a new interaction paradigm gives rise to a new terminal, what will the value-chain distribution look like?

I believe we will definitely have a place. And because the device side is getting closer and closer to you, the attributes of the hardware or entry point side will become stronger and stronger. In the future, you may not be buying hardware, but rather paying a monthly subscription fee to have this assistant serve you.

If this assistant has been with you for three years, attended almost all of your meetings, and not just recorded data, but formed its own judgments and abstract long-term memories as if attending meetings, then you won't be able to do without it.

Q: Who owns the data? What does this mean for the future value chain?

A: There has always been a question in this industry: who owns the data?

Today, Samsung hands its data directly to Google, which monetizes your data through advertising. But ownership of that data originally belongs to the user. Furthermore, long-term memory will be decoupled from the AI itself, just as CPU and memory are decoupled.

When you have a large number of devices, you have more control over who you choose to give your data to.

Q: When Android XR or multimodal AI matures, all the major manufacturers will enter the market, leaving little time for startups?

A: You see the point, right? It's like the smartphone era under Android: once the platform arrived, all the hardware manufacturers came in. The game moved to a new table, and everyone's chips changed. Time may be running out for startups, so maintaining differentiation and a fast iteration pace is crucial.

Everyone says they want to be like Apple, but Apple's greatest achievement is solving three problems: hardware manufacturing, system development, and how to connect the hardware and software in a complete interaction paradigm.

But many companies may end up in a role more like Lenovo's, or even Oracle's. Different levels of the stack play different roles and earn different returns. As long as we can secure a place in this ecosystem, that's enough; it's too early to talk about a specific position now.

#Welcome to follow iFanr's official WeChat account: iFanr (WeChat ID: ifanr), where more exciting content will be presented to you as soon as possible.

Lenovo Tianxi Claw Product Experience: A good lobster is one that everyone can eat.

Although 2026 is the Year of the Horse, if we really had to choose an "animal of the year," it would have to be the lobster.

Just like lobsters in the sea, OpenClaw is delicious, but there is still a considerable barrier to entry to cook and enjoy it well: not only do you need to configure the environment, set up the interface, and find the model, but you also need to set up a computer that is on standby 24 hours a day to serve as a "cage" for it.

In addition, considering the time, effort, and money required to configure skills, fine-tune settings, and integrate with MCP, it is actually difficult for ordinary users to get OpenClaw to deliver on the promise of "significantly improving efficiency."

Just then, Lenovo, a long-established PC manufacturer, stepped forward and did something very inspiring.

They used Lenovo computers, tablets, and Motorola phones as platforms to create a Claw service that was installation-free, cross-platform, and available 24/7.

In other words: to make Claw enjoyable for everyone.

On March 18th, Lenovo officially announced the launch of its own intelligent assistant, "Tianxi Personal Intelligent Agent" Claw, during its spring product launch livestream, and officially began internal testing on March 30th.

▲ Image | Lenovo

iFanr also participated in this Tianxi Claw testing activity. After using it for a period of time, we believe that Tianxi Claw is one of the Claw deployment solutions with the lowest learning curve, the strongest cross-device capabilities, and the best hardware and software integration at present.

Lenovo takes care of the "last mile" of lobster delivery.

Previously, installing a working Lobster on Windows was a very troublesome task.

Because OpenClaw heavily relies on Unix-like environments, with many of its underlying scripts written for Darwin or Linux, the hassle of tinkering with WSL2 alone is enough to stop 90% of would-be adopters from ever getting a usable lobster assistant on Windows.

For Lenovo's Tianxi Claw, none of the above are problems.

Unlike purely local manual deployment, Tianxi Claw adopts an "end-to-cloud hybrid" operation logic.

As a software and hardware supplier, Lenovo can directly reserve calling interfaces for its own hardware, and then run Tianxi Claw on the server side in a form similar to a "cloud computer".

In this way, users not only skip manual deployment and gain the security of file isolation, but also keep the hardware control capabilities of a purely local lobster.

With just one Lenovo account, anyone can run Tianxi Claw simultaneously on computers, tablets, and mobile phones, achieving true "zero-cost deployment" and "out-of-the-box usability".

Taking our experience during the beta testing phase as an example, running Tianxi Claw on this Lenovo YOGA Air 14 only requires four steps: powering on, opening the Tianxi app, logging into your Lenovo account, and starting to use it.

Meanwhile, the biggest advantage of Tianxi Claw as a "pre-installed" lobster assistant is that it ships with a large number of useful skills, eliminating the need to download individual packages from ClawHub.

In the current test version, Tianxi Claw already supports more than ten skills, including document processing, design creation, hardware control, entertainment, and technical tools.

Whether it's asking Tianxi to edit a picture, draft a weekly report, write a heartfelt leave application, or directly prepare a roadshow PPT, it's all just a matter of a single sentence:

In addition to these pre-installed skills, you can of course add other skills to Tianxi Claw yourself.

For example, my frequently used advanced search tool, web-search-plus, can be installed automatically simply by sending the ClawHub project URL to Tianxi Claw.

Even disregarding the aforementioned third-party skills, Tianxi Claw, as a pre-installed lobster assistant on computers, already has enough built-in skills to meet most daily needs.

Taking the editing workflow as an example, I can use natural language to set up a recurring task for Tianxi Claw, which will retrieve and organize the hottest tech news from the past 12 hours every morning, generate a summary, and export it locally:
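Under the hood, a natural-language recurring task like this presumably compiles down to an ordinary scheduled job. Here is a toy stand-in; the 08:00 run time, the "heat" scores, and the one-line digest format are our assumptions, not Tianxi Claw's actual mechanics:

```python
# Toy sketch of the scheduled job a request like "summarize tech news
# every morning" might compile down to.
from datetime import datetime, timedelta

def next_run(now, hour=8, minute=0):
    """Next daily run time at hour:minute, strictly after `now`."""
    run = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if run <= now:
        run += timedelta(days=1)  # today's slot has already passed
    return run

def summarize(items, limit=3):
    """Keep the `limit` hottest items as a one-line digest."""
    top = sorted(items, key=lambda it: it["heat"], reverse=True)[:limit]
    return " | ".join(it["title"] for it in top)

now = datetime(2026, 3, 30, 9, 15)
print(next_run(now))  # 2026-03-31 08:00:00 (9:15 is past today's 8:00 slot)
news = [{"title": "A", "heat": 5}, {"title": "B", "heat": 9},
        {"title": "C", "heat": 7}, {"title": "D", "heat": 1}]
print(summarize(news))  # B | C | A
```

The value of the hosted version is that this loop runs on Lenovo's server, so the digest arrives even if your own machine was asleep overnight.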

During work, I can directly chat with Tianxi Claw to have it help me search for information, organize the knowledge base, and modify images, as well as, most importantly for me, find local files.

At this point, Tianxi Claw can fully leverage its advantages as a hybrid client-cloud application: I don't need to stare at a dull cmd window waiting for a reply, and my computer's fan doesn't spin up during the process; I can simply wait for its feedback in comfort.

The first cross-platform lobster that requires no configuration

Another issue with OpenClaw is how to conduct command and control across different platforms and networks.

As you know, OpenClaw was originally designed as a purely localized agent tool. If your use case is limited to your own home space, then you don't need to worry about control issues at all.

However, if you want to control your lobster while you're out, especially when it involves using MCP or managing the lobster backend, you have to set up NAT traversal or a virtual proxy for your home network, which is quite cumbersome.

Tianxi Claw solves this problem fairly neatly: it wraps the lobster in a network client layer, and all instructions are relayed through the "Tianxi Personal Super Intelligent Agent" app.

In this way, as long as your computer is on and connected to the internet, you can continue performing Tianxi Claw operations on your computer from thousands of miles away simply by using your mobile phone or tablet:

And this capability isn't one-way—I can use this Moto Razr 60 to remotely manage files on my Lenovo laptop, and naturally, I can also use the laptop to manage my phone and tablet.

At this point, we must mention Tianxi Claw's exclusive advantage: since Tianxi Claw is a first-party Lenovo application, Lenovo has equipped it with a set of exclusive skills that can directly call hardware functions.

For example, you can send a message from your phone to the Tianxi Claw app on your computer, asking it to set the computer to dark mode and adjust the screen brightness to the lowest setting. The computer can then execute the following commands directly:
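As a hedged illustration of what the dark-mode half of that request could boil down to: on Windows, app dark mode is controlled by the documented AppsUseLightTheme registry value. The helper below is our own sketch, not Lenovo's implementation, and the brightness value is only clamped here, not actually applied:

```python
# Sketch of translating "dark mode, minimum brightness" into settings.
# AppsUseLightTheme is the documented Windows personalization value;
# the surrounding agent plumbing is our assumption.
import sys

THEME_KEY = r"Software\Microsoft\Windows\CurrentVersion\Themes\Personalize"

def theme_command(dark: bool, brightness: int):
    """Build the (registry value, brightness percent) pair for a request."""
    brightness = max(0, min(100, brightness))  # clamp to a valid percent
    # AppsUseLightTheme: 0 = dark mode, 1 = light mode.
    return {"AppsUseLightTheme": 0 if dark else 1, "brightness": brightness}

def apply(cmd):
    """Write the theme value to the registry (Windows only)."""
    if sys.platform != "win32":
        return False  # only meaningful on Windows
    import winreg
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, THEME_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "AppsUseLightTheme", 0,
                          winreg.REG_DWORD, cmd["AppsUseLightTheme"])
    return True

print(theme_command(dark=True, brightness=-10))
# {'AppsUseLightTheme': 0, 'brightness': 0}
```

The interesting part of Lenovo's offering is precisely that these OS-level calls are pre-wired as first-party skills, so the user never sees anything like this.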

Similarly, if you tell Tianxi Claw on your computer to set an alarm on the tablet to feed the cat at 7 a.m. tomorrow, it can create the alarm directly on the tablet without needing to unlock it.

All of the above can be achieved with just one Lenovo account.

This cross-platform, cross-device lobster not only eliminates separate, cumbersome configurations for Windows and Android, but also enables silent background operation while the screen is off. The convenience this unlocks for ordinary users is hard to overstate.

Minor issues don't affect Tianxi's status as the "most convenient to eat" lobster.

It should be noted that Lenovo Tianxi Claw is still in the testing phase, and the various functions we experienced are not yet "fully implemented".

During our brief testing, we also noticed some issues with Tianxi Claw – after all, its operating mechanism is not as perfect as a locally deployed OpenClaw.

First, while you can request Tianxi Claw to install the skill by sending it a link, adding the API key required for certain skills is quite troublesome.

The reason is simple: Tianxi Claw runs in a virtual container on a Lenovo server, and the app we are talking to is just a chat window.

If the skill you are using requires adding an API key by modifying the .json file in the root directory, then there is currently no way to modify it:

Secondly, the current beta version of Tianxi Claw does not support changing the basic dialogue model, nor can its "personality" and dialogue style be directly modified or adjusted.

In other words, if you want your lobster to speak "more humanly" or stay in a predefined role, Tianxi Claw cannot do that at present; it can only nudge its dialogue style by adding knowledge bases.

Finally, and most importantly, is Tianxi Claw's memory mechanism and token pricing rules.

Due to the limitations of its operating model, Tianxi Claw in the testing phase cannot handle very long context sessions and will automatically perform /compress to compress early information, so it may sometimes forget things you told it before.
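A /compress-style rollup can be approximated by folding the oldest turns into one summary message once the transcript exceeds a budget. This is a toy sketch; the word-count "tokens" and the naive summarizer are stand-ins for whatever Tianxi Claw actually uses:

```python
# Toy sketch of /compress-style context management: when the transcript
# exceeds a budget, fold the oldest messages into a single summary turn.

def count_tokens(messages):
    # Crude stand-in for a real tokenizer: count whitespace-split words.
    return sum(len(m["text"].split()) for m in messages)

def compress(messages, budget=12, keep_recent=2):
    """Collapse old turns into one summary once the budget is exceeded."""
    if count_tokens(messages) <= budget:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Naive "summary": keep only the first two words of each old turn,
    # which is exactly how details like a cat's name can get lost.
    gist = "; ".join(" ".join(m["text"].split()[:2]) for m in old)
    return [{"role": "system", "text": f"(summary) {gist}"}] + recent

history = [
    {"role": "user", "text": "my cat is named Mochi"},
    {"role": "assistant", "text": "noted, Mochi it is"},
    {"role": "user", "text": "draft the weekly report for me"},
    {"role": "assistant", "text": "here is the weekly report draft"},
]
compact = compress(history)
print(len(compact))  # 3: one summary turn plus the two most recent turns
```

The lossy step is the summary line: anything that doesn't survive it is simply gone, which is why the "remember this" command, writing to a separate knowledge base, exists.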

If you want Tianxi Claw to remember certain information long-term, you must command it to "remember this," thus semi-manually saving the information to the current account's knowledge base.

Meanwhile, Lenovo has not yet announced pricing for Tianxi Claw's tokens, and whether users will eventually be able to choose their own base model remains to be seen.

Based on the pricing standards of other existing OpenClaw managed services, we speculate that Tianxi Claw may adopt a "limited-time free, subsequent subscription" mechanism, and may also be integrated into the subscription of other Lenovo cloud services.

In summary, Tianxi Claw is a rare and truly "barrier-free, accessible to everyone" lobster solution.

Users don't need to tinker with Linux containers themselves, don't need to pay 800 yuan on Xianyu for someone to install and uninstall things, and don't need to worry about Tianxi Claw accidentally deleting files: it asks before performing any deletion, and all the data it touches stays in the Tianxi app's sandbox.

When introducing OpenClaw, we often describe it as Tony Stark's Jarvis. But the real challenge has never been talking to Jarvis; it's getting Jarvis to access and control the hardware.

Tianxi Claw is just like Jarvis with its own armor: it can talk, control, and make decisions on its own, and we just need to give orders directly like Tony.
