
OnePlus Ace 6 Ultra Review: Combining a Phone and a Handheld Console

On April 28, OnePlus released the third member of the Ace 6 series, a performance model that maximizes the gaming experience – the OnePlus Ace 6 Ultra, priced from 3799 yuan, and from 3499 yuan after national subsidies.

In terms of appearance, the OnePlus Ace 6 Ultra continues the design language of the Ace 6 series and is available in two colors: "King's Awakening," a black-and-purple combination, and "Metallic Storm," a titanium-tone metallic finish.

The "Metallic Storm" colorway features a brand-new "titanium alloy AG glass" back cover with a smooth, silky feel, and the translucent edges visible from the side add a sense of depth. The back design is simple and clean, retaining only the OnePlus logo in the center and the metal-cube DECO in the upper-left corner.

The DECO is a rounded square: the left side houses a dual-camera setup consisting of a main camera and an ultra-wide-angle lens, while the right side holds the flash and the ACE series logo.

The black "King's Awakening" version uses a brand-new "3D stereoscopic lithography" process that places a large ACE logo at the center of the matte black back cover; viewed at different angles under light, the logo appears to glow.

The phone has a matte, brushed metal frame in a matching color. In addition to the power and volume buttons on the right side, there is a customizable button on the left. It can be configured during initial setup and used, for example, to summon the smart assistant or launch game mode.

The device carries IP66, IP68, IP69, and IP69K ratings for dust and water resistance. Both the front and back are covered with OPPO Crystal Shield Glass, which improves the device's scratch resistance, drop resistance, and sealing.

The front carries a 6.78-inch, 2772×1272 (1.5K) display with a 165Hz ultra-high refresh rate. Typical maximum brightness is 800 nits, full-screen peak brightness reaches 1800 nits, and at 25% APL brightness peaks at 3500 nits. It also supports a "sunlight display" mode, so the screen stays legible outdoors.

OnePlus emphasizes that this screen offers higher color accuracy, rendering a clear picture while preserving more shadow detail. There is also an in-game display enhancement that makes characters in dark areas easier to see.

In addition, the OnePlus Ace 6 Ultra supports a new generation of eye protection, featuring 3840Hz PWM dimming, a 4.5% low-blue-light display, and a low-light eye protection mode for gaming. Internally, it is equipped with a Display P3 Lite graphics chip and supports 100% DCI-P3, HDR10+, Dolby Vision, ZREAL, and HDR Vivid, making it a flagship-level screen in both refresh rate and display quality.

In terms of performance, the OnePlus Ace 6 Ultra runs the Dimensity 9500 mobile platform paired with LPDDR5X RAM and UFS 4.1 storage. At normal temperature it scores 3,410,548 in the AnTuTu benchmark.

The phone pairs the new-generation "Windspeed Gaming Core" with the new-generation Lingxi touch chip and the G2 Pro e-sports networking chip in a three-chip combination, which supports:

  • Sustained full frame rates at up to 165fps, 144fps, or 120fps
  • Native 165fps GPU-rendered frame interpolation
  • An instantaneous touch sampling rate of up to 4000Hz via the Lingxi touch chip

Controls in Genshin Impact and Arknights: Endfield are quite good. Even at the highest graphics settings there is no noticeable lag, and combat feels smooth and comfortable, with a satisfying sense of impact; no external controller is needed to get a great experience.

The same applies to "Peacekeeper Elite". The new-generation Lingxi touch chip improves touch response and accuracy and reduces missed touches, making aiming and firing with touch controls alone noticeably better than before.

The phone's internal cooling has also been upgraded to a new-generation Glacier Cooling System, consisting of a large-area Glacier Cooling VC and 2K supercritical Glacier Graphite. The internal thermal layout is tuned to where gamers actually grip the phone, keeping the hands away from hot spots during intense gameplay for a comfortable hold.

In reality, the OnePlus Ace 6 Ultra doesn't get too hot during normal use, whether it's fast charging or playing games in performance mode. It does get a bit hotter around the top of the DECO back cover while gaming, but you can avoid holding it in that area.

In terms of battery life, the Ace 6 Ultra packs a large 8600mAh Glacier Battery, good for about two days of normal, moderate use. It also supports 120W SuperVOOC flash charging, which is quite fast for a battery of this size, taking the phone from empty to full in about 50 minutes.
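As a rough plausibility check on those figures, we can back out the average charging power the quoted time implies. The ~3.85 V nominal cell voltage below is a typical Li-ion value we assume, not an official OnePlus figure:

```python
# Back-of-envelope check on "8600mAh full in ~50 minutes".
# Assumption: ~3.85 V nominal cell voltage (typical Li-ion; not stated by OnePlus).
capacity_mah = 8600
nominal_v = 3.85
energy_wh = capacity_mah / 1000 * nominal_v      # ≈ 33.1 Wh stored in the pack
implied_avg_w = energy_wh / (50 / 60)            # average input power over 50 minutes
print(f"{energy_wh:.1f} Wh, avg ≈ {implied_avg_w:.0f} W")   # 33.1 Wh, avg ≈ 40 W
```

An average of roughly 40 W against a 120 W peak is consistent with how fast charging tapers off as the battery fills, plus conversion losses.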

In addition, the Ace 6 Ultra performs well with universal fast charging.

Our tests with the AI Power Bank Ultra showed that the OnePlus Ace 6 Ultra's universal fast charging can reach 48W, charging to 60% in 30 minutes and to full in about an hour. That is more than sufficient for users who don't want to carry a dedicated charging kit.

In terms of imaging, the OnePlus Ace 6 Ultra adopts a dual-camera setup consisting of a main camera and an ultra-wide-angle lens.

  • Main camera: 50MP sensor, 6P lens, 23mm equivalent focal length, f/1.8 aperture, dual-axis OIS
  • Ultra-wide: 8MP sensor, 5P lens, 16mm equivalent focal length, f/2.2 aperture

Along with the Ace 6 Ultra, two accessories were also released in matching colors: the OnePlus Gun God game controller and the OnePlus 40W Super Ice Point Magnetic Cooler.

The Gun God controller features a wraparound design with a built-in USB-C connector and no obstructions on the sides, seemingly to leave room for an extending adapter structure. It uses a classic white-and-metallic-red color scheme that echoes the "awakening" aesthetic, and the metallic red stripe along the controller's edge has a light-catching effect.

The USB-C connector inside the controller is hinged to prevent breakage during installation. A separate USB-C charging port sits on the underside of the right grip, so charging while gaming won't get in the way of your hands.

The inside is lined with heat-conducting material and has a rounded-square cutout that accommodates the metal-cube DECO. There is also a telescoping structure in the middle, so any ColorOS-series phone whose DECO fits the cutout can use this controller.

The OnePlus Gun God controller's grips are shaped to fill the palm. There are two metallic-red trigger buttons (L and R) on each side, plus two more buttons on the inner face. In FPS games these can cover basic operations such as shooting, reloading, jumping, and aiming, leaving the touchscreen mainly for movement and camera control.

The buttons support a polling rate of up to 1000Hz, and the triggers use micro mechanical switches with an ultra-short 0.7mm key travel, delivering both tactile feedback and faster response.

After the phone is connected, game mapping can be set in the game assistant. Up to six settings can be saved, and users can set and switch them according to different game types. If it is installed on a OnePlus Ace 6 Ultra, there will also be a corresponding startup animation.

The controller is equipped with an e-sports antenna, which can improve signal reception and ensure network stability during gameplay.

The magnetic cooler is the same as the previously released Space Silver OnePlus 40W Super Ice Point Magnetic Cooler, but the color has been matched to the gamepad for a unified look. This colorway is named "Heartflow White".

A protective cover comes pre-attached; simply remove it before installation. The cooler's USB-C port is located at the top of the device, so it isn't obstructed by the magnetic mount.

Finally, let's look at the price. The OnePlus Ace 6 Ultra comes in five memory and storage configurations, topping out at 16GB + 1TB:

  • 12GB + 256GB: 3799 yuan (3499 yuan after national subsidy)
  • 12GB + 512GB: 4399 yuan (4099 yuan after national subsidy)
  • 16GB + 256GB: 4099 yuan (3799 yuan after national subsidy)
  • 16GB + 512GB: 4699 yuan (4399 yuan after national subsidy)
  • 16GB + 1TB: 5399 yuan (5099 yuan after national subsidy)

  • OnePlus Gun God Game Controller: Pre-sale price 449 yuan
  • OnePlus 40W Super Ice Point Magnetic Cooler, Heartflow White: 229 yuan
"Buy it, it's not expensive."

Welcome to follow iFanr's official WeChat account: iFanr (WeChat ID: ifanr), where more exciting content will be presented to you as soon as possible.

Morning Briefing | Apple: Memory Cost Pressures to Increase Significantly Next Quarter / Unitree Releases Cheapest Humanoid Robot / May 1st Highway Traffic May Set Record


The iPhone 18 Pro may receive its most powerful camera upgrade yet: a larger telephoto aperture and a new "Siri mode".

DeepSeek's paper was published and then deleted, prematurely revealing its visual reasoning solution that "gives AI fingers."

Apple's gross margin hit a record high, but memory cost pressures will "significantly increase."

Samsung's Q1 chip profits surged 49 times, with a single division consuming 94% of the group's total operating profit.

Dreame CEO Yu Hao responded to the requirement for all employees to open social media accounts: It's to cultivate "composite capabilities," as single capabilities will be replaced in the AI era.

Seres sold 78,500 new energy vehicles in the first quarter, with R&D expenses increasing by 70.7% year-on-year.

Xiaohongshu establishes a first-level AI department

A new method for cracking Denuvo ("D encryption") has emerged, forcing legitimate players to "check in" online every 14 days.

Altman: Codex will become my primary way of interacting with computers.

The model kept saying "goblin," and it took OpenAI months to find the root cause.

Yu Minhong responds to the controversy over his 1.8 million Oriental Selection shares: "They insisted on giving them to me," adding that he will donate all the proceeds after fulfilling his promise.

Qualcomm's second-quarter revenue reached $10.6 billion, with automotive chip sales hitting a record high.

Tesla's first mass-produced all-electric semi-trailer rolls off the assembly line.

OpenAI co-founder: In the era of Software 3.0, Prompt is the new code.

Microsoft CEO: New deal with OpenAI is a "sure thing" for Microsoft.

Starting at 26,900 yuan, Unitree Robotics releases its R1 series of dual-arm humanoid robots.

Alibaba launches "digital employee" QoderWake

NVIDIA releases the full-modal model Nemotron 3 Nano Omni

Bailing open-sources its trillion-parameter flagship model, Ling-2.6-1T.

Qwen-Scope, a new open-source large model in the Qwen (Tongyi Qianwen) family.

⚠

May Day Travel Warning: Highway Traffic May Reach Record High on May 1st

Shanghai releases its first labor rules agreement for the express delivery industry: No fines may be imposed for complaints that have not been verified.

Big news

The iPhone 18 Pro may receive its most powerful camera upgrade yet: a larger telephoto aperture and a new "Siri mode".

According to Bloomberg, the iPhone 18 Pro series, to be released this fall, is expected to see the biggest camera hardware upgrade in the product line's history. Journalist Mark Gurman, citing leaks, indicates that the iPhone 18 Pro's main camera is expected to feature variable aperture technology, and the telephoto lens will also adopt a larger aperture.

Apple also plans to add a "Siri" mode to the Camera app in the upcoming iOS 27, replacing the existing standalone visual intelligence interface and placing it alongside the traditional photo and video options.

In this mode, users can use services like ChatGPT to ask questions about the content in the image, or use Google to perform image searches. Notably, the report also states that Apple will redesign the shutter button for Siri mode to fit the Apple Intelligence style.

Gurman believes that the combination of these cameras with AI hardware and software will pave the way for a series of Siri-based wearable devices that Apple plans to launch next, including new AirPods, smart glasses, and pendants.

DeepSeek's paper was published and then deleted, prematurely revealing its visual reasoning solution that "gives AI fingers."

Last night, DeepSeek published a new paper on multimodal reasoning, "Thinking with Visual Primitives," but the related tweets and GitHub page were deleted a few hours later. APPSO had read the entire paper before it was taken down.

 Related reading: What exactly did DeepSeek's newly deleted paper say?

The paper points out that current multimodal large models suffer from a "reference gap" in visual reasoning—the model can see the image clearly, but cannot accurately point to the specific object in it during the reasoning process.

DeepSeek's solution is to allow the model to directly output image coordinates (bounding boxes or coordinate points) in the thought process, embedding the "pointing" action into the reasoning process itself, rather than just as the final answer output. Researchers liken this to the human cognitive method of "pointing and thinking at the same time".
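As a toy illustration of what "pointing embedded in the reasoning process" could look like downstream, the sketch below extracts coordinates interleaved in a reasoning trace. The `<box>`/`<point>` tag names and the trace itself are our invention for illustration, not the paper's actual token format:

```python
import re

# Hypothetical reasoning trace with coordinates embedded mid-thought.
trace = ("The red mug is at <box>120,80,210,190</box>; "
         "its handle points toward <point>205,130</point>.")

# Pull the "pointing" actions back out of the chain of thought.
boxes = [tuple(map(int, m.split(","))) for m in re.findall(r"<box>([\d,]+)</box>", trace)]
points = [tuple(map(int, m.split(","))) for m in re.findall(r"<point>([\d,]+)</point>", trace)]
print(boxes, points)   # [(120, 80, 210, 190)] [(205, 130)]
```

The key idea is that the coordinates are part of the reasoning text itself, so they can be grounded, checked, or rendered at every intermediate step rather than only in the final answer.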

In terms of efficiency, for images of the same size Gemini-3-Flash requires approximately 1100 tokens, Claude-Sonnet-4.6 approximately 870, and GPT-5.4 approximately 740, while DeepSeek uses only about 90, freeing up computing power during inference for coordinate labeling.
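Taking those reported per-image budgets at face value, the relative savings work out to roughly an 8x to 12x reduction:

```python
# Per-image token budgets as quoted in the article.
per_image_tokens = {"Gemini-3-Flash": 1100, "Claude-Sonnet-4.6": 870,
                    "GPT-5.4": 740, "DeepSeek": 90}
base = per_image_tokens["DeepSeek"]
for model, tokens in per_image_tokens.items():
    print(f"{model}: {tokens} tokens ({tokens / base:.1f}x DeepSeek's budget)")
```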

In the "maze navigation" benchmark test, DeepSeek leads with an accuracy of 66.9%. For reference, GPT-5.4 has an accuracy of 50.6%, Gemini-3-Flash has an accuracy of 49.4%, and Claude-Sonnet-4.6 has an accuracy of 48.9% (the random guess accuracy rate for this task is 50%, and the latter three are close to random levels).

The paper also acknowledges the existing limitations: the coordinate accuracy is still insufficient in fine scenes (the finger counting failure is a direct manifestation of this); visual primitives require specific trigger words to activate, and the model cannot yet autonomously determine when to use them; topological reasoning has limited generalization ability outside the training distribution.

Big companies

Apple's gross margin hit a record high, but memory cost pressures will "significantly increase."

On April 30th local time, Apple released its fiscal second quarter 2026 results (ending March), reporting revenue of $111.18 billion, a 17% year-over-year increase, and EPS of $2.01, both exceeding Wall Street expectations. The performance of each business segment is as follows:

  • iPhone revenue was $56.99 billion, up 22% year-over-year, slightly below LSEG analysts' expectations of $57.21 billion;
  • Services revenue reached $30.98 billion, up approximately 16% year-over-year, exceeding the expected $30.39 billion;
  • Mac revenue reached $8.4 billion, exceeding the expected $8.02 billion;
  • iPad revenue reached $6.91 billion, exceeding the expected $6.66 billion;
  • Revenue from wearables, home, and accessories reached $7.9 billion, exceeding the expected $7.7 billion;
  • Gross margin reached 49.3%, higher than the previous quarter's 48.2% and above the expected 48.4%.
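As a quick sanity check, the five segment figures quoted above do sum to the reported $111.18 billion total:

```python
# Segment revenues in billions of USD, as reported.
segments = {"iPhone": 56.99, "Services": 30.98, "Mac": 8.40,
            "iPad": 6.91, "Wearables/Home/Accessories": 7.90}
total = sum(segments.values())
print(f"${total:.2f}B")   # $111.18B, matching the headline revenue figure
```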

Two key highlights to watch this quarter:

  • Revenue in Greater China reached US$20.5 billion, a significant increase of 28% year-on-year, resuming strong growth momentum;
  • The service business continued to boost overall profit margins, with gross margin rising for several consecutive quarters, currently at a record high of 49.3%; R&D expenditures increased by 33% year-on-year to US$11.42 billion.

Looking ahead to the current fiscal quarter (June quarter), Apple expects revenue to grow by 14%-17% year-over-year, significantly exceeding analysts' previous forecast of 9.5% growth. CFO Kevan Parekh pointed out that memory cost pressures will "significantly increase" next quarter due to a global memory shortage driven by AI demand, and the company will "evaluate various solutions" to address this.

Samsung's Q1 chip profits surged 49 times, with a single division consuming 94% of the group's total operating profit.

According to Reuters and CNBC, Samsung Electronics reported revenue of 133.9 trillion won (approximately US$90 billion) in Q1 this year, up about 69% year-on-year; operating profit was 57.2 trillion won, up about 8.5 times year-on-year, a record high, exceeding analysts' expectations of 55.28 trillion won.

The DS (Device Solutions, covering memory chips and foundry) division reported Q1 revenue of 81.7 trillion won, up 225% year-on-year; operating profit was 53.7 trillion won, versus only about 1.1 trillion won in the same period last year, an increase of about 49 times, accounting for 94% of the group's overall operating profit and dwarfing every other division.

In contrast, during the same period last year, the MX (mobile and network) division accounted for 64% of the group's profits with 4.3 trillion won, while the DS division accounted for only 16%. In Q1 of this year, the operating profit of the MX division plummeted to 2.8 trillion won, a year-on-year decline of 35%; the operating profit of the display division also fell by 20% to 400 billion won.

The surge in AI data center construction was the core driver this quarter. Samsung stated frankly in its earnings call that "demand fulfillment has fallen to a historic low," with customers locking in orders for next year due to concerns about supply shortages, and the supply-demand gap is expected to widen further next year.

Samsung also disclosed that it had already achieved mass production of HBM4 chips and started shipping them to Nvidia's Vera Rubin platform in February this year. The revenue target for HBM this year is to increase more than three times year-on-year, and it is accelerating its efforts to catch up with SK Hynix's leading position in the HBM market.

 Related reading: Samsung's Memory Products Show No Mercy, Forcing Samsung Mobile Phones to Suffer Losses

Dreame CEO Yu Hao responded to the requirement for all employees to open social media accounts: It's to cultivate "composite capabilities," as single capabilities will be replaced in the AI era.

Yesterday, Yu Hao, founder and CEO of Dreame Technology, posted on Weibo, requiring "every Dreame employee to open social media accounts on all platforms" and to post three videos every day, each lasting 15 minutes, introducing the products, technologies, or selling points and core innovations of the company's products under development.

He also disclosed the reward mechanism: 10,000 genuine followers earn a 10,000 yuan reward, 50,000 followers earn 50,000 yuan, and 100,000 followers earn 100,000 yuan. Yu Hao stated that Dreame's staff alone would need to run at least 20,000 accounts.

That same evening, Yu Hao published another lengthy article responding to external criticisms, characterizing the requirement as a measure to cultivate employees' "composite abilities."

Engineers are naturally adept at handling complex parameters and solving thousands of technical problems, but they are not good at communicating with people or explaining things clearly and simply! We want to train everyone to speak in plain language and explain their products and technologies in simple and easy-to-understand terms!

He also clarified that people who join Dreame already holding hundreds of thousands of followers will not automatically receive the corresponding rewards, as "that's still a single ability"; only those who master both technical and communication skills meet the "composite ability" standard and will then be given the corresponding incentives.

Yu Hao also stated that in the AI era, single skills are easily replaced, and only those who master multiple skills can "master AI, manage complex systems, and lead larger teams."

Seres sold 78,500 new energy vehicles in the first quarter, with R&D expenses increasing by 70.7% year-on-year.

Yesterday, Seres Group released its first quarter report for 2026. In the first quarter of this year, it achieved revenue of 25.75 billion yuan, a year-on-year increase of 34.5%; net profit attributable to shareholders of the listed company was 750 million yuan, a slight year-on-year increase of 0.9%; total profit was 850 million yuan, a year-on-year decrease of 4.8%; and sales of new energy vehicles reached 78,500 units.

  • Operating costs were RMB 18.99 billion, corresponding to a gross profit margin of approximately 26.2%, which narrowed slightly compared to the same period last year;
  • Research and development expenses amounted to RMB 1.79 billion, a year-on-year increase of 70.7%, and an increase of approximately RMB 740 million compared to the same period last year;
  • Sales expenses amounted to 3.72 billion yuan, representing a year-on-year increase of 39.7%.
  • Financial expenses turned from a net income of RMB 140 million in the same period last year to a net expenditure of RMB 95.81 million.

It is worth noting that the net profit attributable to the parent company after deducting non-recurring gains and losses was only RMB 103 million, a significant year-on-year decrease of 73.9%, which is significantly different from the GAAP net profit attributable to the parent company of RMB 750 million. The difference mainly comes from the RMB 630 million in non-recurring gains and losses this quarter, of which government subsidies contributed RMB 630 million.
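A quick arithmetic check ties these reported figures together (all values taken straight from the quarter's report as quoted above):

```python
# Seres Q1 figures as reported (RMB).
revenue, op_costs = 25.75e9, 18.99e9
gross_margin = (revenue - op_costs) / revenue          # ≈ 26.3%; the article rounds to ~26.2%

gaap_np, adjusted_np = 7.50e8, 1.03e8                  # attributable net profit, GAAP vs adjusted
nonrecurring = 6.30e8                                  # non-recurring gains this quarter
gap = gaap_np - adjusted_np                            # 647 million
print(f"gross margin ≈ {gross_margin:.1%}; "
      f"non-recurring items explain {nonrecurring / gap:.0%} of the GAAP/adjusted gap")
```

The roughly 97% coverage confirms the article's point that government subsidies and other one-off items account for nearly all of the difference between the two profit figures.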

Xiaohongshu establishes a first-level AI department

According to CLS News Agency, Xiaohongshu officially announced a new round of organizational upgrades yesterday through an internal memo to all employees. Conan (real name Ding Ling) has been promoted to president, overseeing the three core businesses of community, e-commerce, and commercialization, as well as the technology system, and will report directly to CEO Seiya (Mao Wenchao).

AI is the core keyword of this adjustment. Xiaohongshu has established a first-level AI department, Dots, which reports directly to Conan and is positioned to build a full-chain technology system from model development and infrastructure to product application; at the same time, an Enterprise Intelligence Department has been established, integrating the original Enterprise Efficiency Department and Data Science Department.

In terms of internationalization, Xiaohongshu has officially established its overseas business unit, Rednote, which reports directly to the CEO; the cross-border e-commerce platform Redshop is expected to launch in June this year. In addition, the innovation incubation department, Lab1327, has been established simultaneously, led by product design head Sakuragi.

A new method for cracking Denuvo ("D encryption") has emerged, forcing legitimate players to "check in" online every 14 days.

Recently, in response to cracking, Denuvo (D Encryption) and 2K Games jointly implemented stricter licensing restrictions on several of their games, sparking a strong backlash from the player community.

The affected games include NBA 2K25, NBA 2K26, and Marvel's Nightborne. These games now use offline authorization tokens with a fixed validity period, which will automatically expire in approximately 14 days.

Regardless of whether players change their hardware or reinstall their system, once the token expires, the game will be unable to start and players must connect to the internet to re-obtain authorization to continue playing.

Earlier this year, the hacking groups MKDev and DenuvOwO developed a Hypervisor-based bypass scheme (HVB) that installed a kernel-level driver to intercept and simulate Denuvo's verification requests, enabling almost all single-player games protected by Denuvo to be cracked or bypassed.

Altman: Codex will become my primary way of interacting with computers.

At the Stripe annual developer conference, OpenAI CEO Sam Altman recently stated that Codex is experiencing "explosive growth" and that this AI programming tool will become his primary way of interacting with computers.

He attributed this surge to the overall leap in model inference capabilities, the closed loop of user feedback in code scenarios, and the combined effect of data accumulation, adding that "once you know something is possible, it's easier to go all out to do it."

Codex's core users are still primarily programmers, but Altman revealed that its usage in non-programming scenarios has exceeded expectations.

OpenAI's goal is to make Codex more than just programming, but to cover "everything you do in front of your computer." Altman admitted that the non-programming part is currently "only about 10% complete," but he expects it to "catch up quickly" as real users join.

Supporting all of this is OpenAI's increasingly aggressive approach to model training. Altman was previously asked on The Atlantic CEO Nicholas Thompson's podcast: Has OpenAI ever run a model trained entirely on synthetic data?

He paused for a moment and said, "I'm not sure if I should say this," which was almost an implicit admission. He then explained that the core capability of the model is reasoning, and reasoning can be learned entirely from purely synthetic data.

He used mathematics as an example: can a model that has never seen human data calculate better than humans? "I think it can." But understanding human values is different: "A model that has never been exposed to human culture is unlikely to be able to do it."

Thompson had previously mentioned that GPT-4 was "the last model that didn't use much AI data," a view Altman agreed with.

The model kept saying "goblin," and it took OpenAI months to find the root cause.

Yesterday, OpenAI published an article reviewing the "goblin" problem that has plagued multiple generations of GPT models: starting from GPT-5.1, the models increasingly used fantasy creatures such as goblins and gremlins as metaphors in their responses.

Data shows that after GPT-5.1 was launched, the frequency of the word "goblin" in ChatGPT conversations increased by 175%, and "gremlin" increased by 52%. By GPT-5.4, the problem had fully erupted.

The article points out that the problem is related to ChatGPT's "Nerdy" personality customization feature. This personality type's system prompts require the model to "use the fun of language to defuse seriousness" and "acknowledge the strangeness of the world and enjoy it."

During training, reward signals used to reinforce this personality style consistently scored higher on outputs containing fantasy creature vocabulary, a bias observed in 76.2% of the dataset. The "nerd" personality, while accounting for only 2.5% of all ChatGPT responses, contributed 66.7% of "goblin" mentions.
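Those two percentages imply a striking over-representation; the odds-ratio-style calculation below uses only the figures quoted above:

```python
# "Nerdy" personality: 2.5% of responses, 66.7% of goblin mentions (as reported).
persona_share = 0.025
mention_share = 0.667
# Rate of goblin mentions per response, Nerdy vs everything else.
rate_ratio = (mention_share / persona_share) / ((1 - mention_share) / (1 - persona_share))
print(f"a Nerdy response was ~{rate_ratio:.0f}x more likely to mention goblins")   # ~78x
```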

Currently, OpenAI has taken the personality offline, removed related reward signals, and filtered training data, and stated that a new model behavior auditing tool has been implemented.

 Related reading: Who stuffed a bunch of "monsters" into GPT-5.5's brain?

Yu Minhong responds to the controversy over his 1.8 million Oriental Selection shares: "They insisted on giving them to me," adding that he will donate all the proceeds after fulfilling his promise.

According to Bianews, Oriental Selection recently issued an announcement stating that, in accordance with the 2023 share incentive plan, it has granted 19.3014 million shares to the company's directors, senior executives and core employees, involving 302 people, accounting for 1.82% of the issued shares. The closing price on the grant date was HK$28.44 per share.

Among them, Yu Minhong, Executive Director, Chairman and CEO, was granted 1.8 million shares, accounting for 0.17%; Yin Qiang, Executive Director and CFO, was granted 450,000 shares, accounting for 0.04%; and the remaining 300 employees were granted a total of 17,051,400 shares, accounting for 1.61%.

This share grant immediately sparked external doubts, with public opinion focusing on the claim that he was "rewarding himself." Yesterday, Yu Minhong responded in a post on his personal social media platform.

Yu Minhong revealed that he initially refused and had never received any salary since the establishment of Oriental Selection. However, the board of directors, representing the shareholders, believed that without equity incentives, his efforts and rewards would be "unequal," and ultimately persuaded him to accept the grant.

Yu Minhong also promised that after the equity is cashed out and tax obligations are fulfilled, all cash proceeds will be used in three ways: to establish a chairman's reward fund to reward employees who have made outstanding contributions to Oriental Selection; to donate to the New Oriental Foundation, all of which will be used to support primary and secondary school students in rural areas; and to donate a portion to Peking University to help students from rural areas.

Qualcomm's second-quarter revenue reached $10.6 billion, with automotive chip sales hitting a record high.

Yesterday, Qualcomm released its second fiscal quarter results (ending March 29):

Revenue for the quarter was $10.6 billion, down 3% year-over-year; GAAP net income was $7.37 billion, up 162% year-over-year, primarily driven by a one-time tax benefit; Non-GAAP diluted earnings per share were $2.65, down 7% year-over-year.

  • The semiconductor business, QCT, reported revenue of $9.076 billion, a 4% year-over-year decline, with its pre-tax profit margin narrowing to 27% (compared to 30% in the same period last year).
  • Mobile chip revenue was $6.024 billion, down 13% year-on-year, dragged down by tight memory supply and weak demand from some mobile phone OEMs;
  • Revenue from automotive chips reached $1.326 billion, a 38% year-over-year increase, setting a new quarterly record;
  • IoT revenue reached $1.726 billion, a year-on-year increase of 9%;
  • QTL's licensing business generated $1.382 billion in revenue, a 5% year-over-year increase, with a pre-tax profit margin rising to 72%.

Qualcomm expects third-quarter revenue to be between $9.2 billion and $10 billion, with non-GAAP diluted earnings per share guidance of $2.10 to $2.30. Memory supply constraints and related price pressures will continue to affect demand from some mobile phone OEMs. QCT mobile phone revenue from Chinese customers is expected to bottom out in the third fiscal quarter and then resume sequential growth.

Tesla's first mass-produced all-electric semi-trailer rolls off the assembly line.

Yesterday, Tesla officially announced that the first production vehicle of its "all-electric semi-trailer truck" Semi has officially rolled off the dedicated high-capacity production line at its Nevada Gigafactory.

According to the final production specifications announced in February of this year, the vehicle will be offered in two versions: a standard range version with a range of 325 miles when fully loaded with a gross weight of 82,000 pounds (priced at approximately $260,000), and a long-range version with a range of 500 miles (priced at approximately $290,000), making it the lowest-priced Class 8 all-electric tractor currently on the market.

In terms of core technical parameters, both models are equipped with an 800 kW three-motor powertrain producing up to 1072 horsepower, and support 1.2 MW Megacharger fast charging, which can restore 60% of the driving range within a legally mandated 30-minute rest break.
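Taking the quoted charging figures at face value gives an upper bound on the energy involved, assuming (unrealistically) that the full 1.2 MW is sustained for the entire 30 minutes:

```python
# Upper-bound estimate from "60% of range restored in 30 minutes at 1.2 MW".
peak_mw, minutes = 1.2, 30
energy_kwh = peak_mw * 1000 * minutes / 60       # at most 600 kWh delivered
miles_restored = 0.60 * 500                      # long-range version, fully loaded
kwh_per_mile = energy_kwh / miles_restored
print(f"<= {energy_kwh:.0f} kWh, <= {kwh_per_mile:.1f} kWh/mile")   # <= 600 kWh, <= 2.0 kWh/mile
```

Real sessions taper below peak power, so actual consumption would come in somewhat under 2 kWh per mile, which is in the right ballpark for a fully loaded Class 8 electric tractor.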

In addition, the Nevada factory has achieved a high degree of vertical integration, with the supporting 4680 battery cells manufactured in the same factory area, directly eliminating the supply chain bottlenecks that previously limited Semi's production capacity.

OpenAI co-founder: In the era of Software 3.0, prompts are the new code.

Recently, Andrej Karpathy, an AI researcher and co-founder of OpenAI, proposed the new paradigm concept of "Software 3.0" in an interview at the Sequoia Capital AI Ascent Summit, and said that even top engineers like himself feel "unprecedentedly lagging behind" in the current AI wave.

Karpathy divides software development into three stages: Software 1.0 is explicit code written by humans; Software 2.0 is neural network weights obtained through data training; and Software 3.0 uses large language models (LLMs) as the core computational interpreter, with developers steering the model through prompt engineering.
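The contrast between the first and third paradigms can be sketched in a few lines. This is purely illustrative; the function names and the `call_llm` placeholder are hypothetical, not from any specific framework:

```python
# Software 1.0: behavior is explicit, human-written logic.
def classify_sentiment_v1(text: str) -> str:
    negative_words = {"bad", "awful", "broken", "hate"}
    hits = sum(word in negative_words for word in text.lower().split())
    return "negative" if hits > 0 else "positive"

# Software 3.0: behavior is specified in natural language, and the LLM acts
# as the interpreter. `call_llm` is a stand-in for any chat-completion API.
PROMPT = (
    "You are a sentiment classifier. "
    "Reply with exactly one word, 'positive' or 'negative'.\n\nText: {text}"
)

def classify_sentiment_v3(text: str, call_llm) -> str:
    return call_llm(PROMPT.format(text=text))

print(classify_sentiment_v1("this update is awful"))  # negative
```

In the 3.0 version, "editing the program" means editing the prompt string, which is exactly the shift Karpathy describes.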

He likened the context window to "a lever for manipulating the LLM interpreter," arguing that the essence of programming is shifting from writing logic to orchestrating and supervising clusters of intelligent agents.

He considers last December a watershed moment in his personal experience. It was then that he began using AI agent tools extensively for programming and discovered that the latest models generated code snippets that were "near perfect." This newfound trust also made him realize that traditional programming skills were undergoing irreversible devaluation.

However, Karpathy also pointed out that current AI models suffer from clearly "jagged intelligence": they are extremely capable in verifiable domains such as mathematics and code, yet still frequently stumble on common-sense logic, such as the classic "walk 50 meters to the car wash" example.

He believes that developers skilled in agent engineering will achieve efficiency gains far exceeding those of traditional "10x programmers." He also suggests that companies completely restructure their recruitment processes, replacing traditional algorithm-based assessments with "assigning large, complete projects and observing how candidates use agent tools to build and defend systems."

Regarding humanity's core value, Karpathy believes that even if AI agents can handle all the low-level API details, humans remain an indispensable bottleneck in the system, responsible for architectural aesthetics and logical boundaries, and for deciding "what to build" and "why it is worth doing."

Microsoft CEO: New deal with OpenAI is a "sure thing" for Microsoft.

According to TechCrunch, Microsoft CEO Satya Nadella spoke positively about the financial impact of the revised partnership agreement with OpenAI on the company during yesterday's earnings call. Nadella stated that the new agreement is beneficial to all parties involved.

We are satisfied with our collaboration with OpenAI. I always place great importance on ensuring that any collaboration results in a win-win situation, which is also a prerequisite for maintaining a good partnership.

He emphasized that under the new agreement, Microsoft retains access to OpenAI's intellectual property until 2032, including its cutting-edge models and AI Agent products, without having to pay for them anymore.

Nadella seemed unconcerned about OpenAI's earlier announcement of an exclusive partnership with Amazon, Microsoft's biggest cloud computing competitor. Microsoft's latest quarterly financial report shows that its AI business has exceeded $37 billion in annualized revenue, a year-on-year increase of 123%.

Nadella also pointed out that Microsoft's revenue streams from OpenAI extend beyond this. Previously, OpenAI reached an agreement with Microsoft, committing to purchase over $250 billion worth of Microsoft cloud services; Microsoft also holds a 27% stake in OpenAI.

In addition, Nadella emphasized that enterprise customers typically tend to use multiple AI models simultaneously, and OpenAI's relative advantage in the industry, especially in the enterprise market, is not as prominent as it used to be.

We offer the widest range of models among all hyperscale cloud service providers, allowing customers to choose from OpenAI, Anthropic, open-source models, and more, depending on their workload. Currently, over 10,000 customers are using more than one model.

New products

Starting at 26,900 yuan, Unitree Robotics releases its R1 series of dual-arm humanoid robots.

Yesterday, Unitree Robotics officially released its R1 series of dual-arm humanoid robots, starting at 26,900 yuan. It is currently the cheapest commercially available humanoid robot, featuring "ultra-fast deployment and multi-scenario application" and targeting settings such as industry and commerce.

Four models are available: R1-A5, R1-A7, R1-A5-D, and R1-A7-D, corresponding to 5-DOF and 7-DOF single-arm configurations. They can be fitted with a two-finger gripper, a three-finger dexterous hand, or a five-finger dexterous hand, and offer two mounting options: a fixed base or a mobile chassis. The robots run on external power or a lithium battery, with a battery life of about 1.5 hours.

Taking the R1-A5 as an example, the machine has 15 degrees of freedom in total, while the R1-A7 reaches 19: each arm has 5 and 7 degrees of freedom respectively, the waist 1, and the head 2. The gripper end effector has an accuracy of ±0.1 mm, and each arm carries a maximum load of about 2 kg.

The fixed base version (R1-A5/A7) measures 520×440×683mm in its retracted state and 520×440×1323mm when raised, with a total weight of approximately 11kg (R1-A5) to 13kg (R1-A7). It requires external power. The mobile chassis version (R1-A5-D/A7-D) weighs approximately 30kg and 32kg respectively.

This series comes standard with a binocular vision module, and both the body and head are equipped with an 8-core high-performance CPU. The head module provides an additional 10 TOPS of computing power, and can be upgraded to an NVIDIA Jetson Orin high-computing-power module, with a maximum computing power of 100 TOPS; it also supports WiFi 6 and Bluetooth 5.2.

In terms of development ecosystem, Unitree has opened up interfaces for underlying layers, robotic arms, audio, lighting, and vision control, supporting drag-and-drop teaching and providing developers with full-stack secondary development capabilities.

Alibaba launches "digital employee" QoderWake

Yesterday, Alibaba released two agent products: QoderWake, a digital employee product, and the Qoder mobile app, covering enterprise and individual use cases.

QoderWake is positioned as the industry's first secure, controllable, and continuously evolving production-grade digital employee product, capable of assuming specific roles such as software engineer, operations, and analyst in real-world work environments.

  • It adopts an innovative Harness-First architecture: after executing a task, the system categorizes and accumulates the experience along five dimensions (memory, skills, strategies, verification rules, and workflow), solving the "forgetting after completion" problem of general-purpose AI agents;
  • It can autonomously execute tasks according to preset rules, automatically trace the task trajectory, and proactively review the process;
  • It can continuously prune outdated experience, merge conflicting strategies, and retire ineffective ones, so that it becomes more accurate with use.

Currently, QoderWake has launched the "Digital Programmer" role, which has been officially implemented within Alibaba. It can autonomously complete tasks such as feedback classification, log analysis, root cause identification, and automatic generation of repair code. The entire process is unattended, with human intervention only for final confirmation in certain scenarios.

The Qoder mobile app allows users to remotely control the desktop version of Qoder to complete tasks. The app directly displays the AI agent's thought process and workflow during interaction, and supports proactive pop-ups and user confirmation of details to improve the overall experience.

NVIDIA releases the full-modal model Nemotron 3 Nano Omni

NVIDIA yesterday released the Nemotron 3 Nano Omni, an open multimodal model that integrates vision, audio, and language capabilities into a single system, designed specifically for AI agent workflows.

This model employs a 30B-A3B mixture-of-experts (MoE) architecture, enabling multimodal reasoning without a separate perception model. While maintaining comparable interaction performance, its throughput is 9 times that of similar open omni-modal models. Key highlights include:

  • Computer operation: Supports a native input resolution of 1920×1080; NVIDIA reports significant progress on the OSWorld benchmark for its computer-use AI agent;
  • Document intelligence: It can parse documents, charts, tables and mixed media, and support coherent reasoning between visual structure and text content;
  • Audio and video understanding: Integrates voice, video, and recorded content into a single inference stream, suitable for customer service, research, and monitoring scenarios.

The model is released with open weights, supporting deployment across all scenarios from local devices such as NVIDIA Jetson and NVIDIA DGX Spark to data centers and cloud environments, and can be used in conjunction with other models in the NVIDIA Nemotron series or third-party proprietary models.

Bailing open-sources its trillion-parameter flagship model Ling-2.6-1T.

Yesterday, Bailing officially open-sourced its trillion-parameter flagship model Ling-2.6-1T, optimized for real production scenarios such as agents, coding, knowledge management, and office automation. It aims to reduce output cost at the same level of intelligence through more efficient "fast thinking".

In the comprehensive evaluation of Artificial Analysis, Ling-2.6-1T achieved an Intelligence Index of approximately 34 points with about 16M output tokens, entering the high-attractiveness range and demonstrating comprehensive intelligence performance on par with GPT-4.5 (Non-Reasoning).

Ling-2.6-1T has also reached the open-source state-of-the-art (SOTA) level on multiple key benchmarks, possessing multi-dimensional capabilities from web page and design generation, code development to written text generation.

 Hugging Face: huggingface.co/inclusionAI/Ling-2.6-1T

 ModelScope: modelscope.cn/models/inclusionAI/Ling-2.6-1T

Qwen team open-sources the model-interpretability module Qwen-Scope.

Yesterday, the Qwen team released and open-sourced Qwen-Scope, a model-interpretability module. By inserting a sparse autoencoder (SAE) into a hidden layer of the Qwen model, it "translates" the model's opaque internal computations into human-understandable feature concepts, enabling analysis of and targeted intervention in model behavior.

  • Reasoning control: No prompt changes needed; simply activating or suppressing features changes the output language or writing style. For example, disabling the "Chinese feature" eliminates abnormal mixing of Chinese words into English replies, while activating the "Classical Chinese feature" switches continuation writing from vernacular to classical Chinese;
  • Data processing: Toxic-content classification can be completed with only a small number of seed samples and no additionally trained classifier; with targeted synthesis of supplementary data, training-data efficiency improves by about 15x over traditional methods;
  • Training optimization: In the supervised fine-tuning stage, a loss function designed around anomalous feature activations significantly reduces low-quality responses such as language mixing; in the reinforcement learning stage, steering features to raise the sampling probability of low-frequency anomalies such as "repeated generation" accelerates model optimization;
  • Evaluation de-duplication: By computing feature-activation overlap between evaluation sets, duplicated coverage was identified. Analysis found an overlap of 0.63 between GSM8K and MATH and 0.50 between MMLU-Pro and SuperGPQA, casting doubt on the actual reference value of some commonly used benchmarks.
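The core mechanics described above (encode a hidden state into sparse features, intervene on one feature, and measure feature overlap between inputs) can be sketched with a toy SAE. This is a minimal illustration with random weights; nothing here reflects Qwen-Scope's actual architecture, dimensions, or training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: hidden size 64, SAE dictionary of 512 features.
D_MODEL, N_FEATURES = 64, 512
W_enc = rng.normal(scale=0.1, size=(D_MODEL, N_FEATURES))
b_enc = np.zeros(N_FEATURES)
W_dec = rng.normal(scale=0.1, size=(N_FEATURES, D_MODEL))

def sae_encode(h):
    """Map a hidden state to non-negative feature activations.
    (A trained SAE adds an L1 penalty so activations are actually sparse.)"""
    return np.maximum(h @ W_enc + b_enc, 0.0)

def sae_decode(f):
    """Reconstruct the hidden state from feature activations."""
    return f @ W_dec

def intervene(h, feature_id, value=0.0):
    """Clamp one interpretable feature (e.g. a 'language' feature)
    and write the edited representation back into the hidden state."""
    f = sae_encode(h)
    f[feature_id] = value
    return sae_decode(f)

def activation_overlap(acts_a, acts_b, threshold=0.0):
    """Jaccard overlap between the feature sets two inputs activate,
    the kind of statistic behind the 0.63 GSM8K/MATH figure above."""
    a = set(np.flatnonzero(acts_a > threshold))
    b = set(np.flatnonzero(acts_b > threshold))
    return len(a & b) / len(a | b) if a | b else 0.0

h = rng.normal(size=D_MODEL)
f = sae_encode(h)
print("active features:", int((f > 0).sum()), "of", N_FEATURES)
```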

 Hugging Face: huggingface.co/spaces/Qwen/QwenScope?spm=a2ty_o06.30285417.0.0.65e5c921MGq3Tu

 ModelScope: modelscope.cn/studios/Qwen/QwenScope?spm=a2ty_o06.30285417.0.0.65e5c921FZvQi4

New consumption

May Day Travel Warning: Highway Traffic May Reach Record High on May 1st

According to CCTV.com, the Road Network Center of the Ministry of Transport recently released the "Analysis Report on the Operation of the National Highway Network during the May Day Holiday". The report expects generally favorable travel weather nationwide during this year's May Day holiday, with average daily traffic on national highways of about 64 million vehicles.

According to the report, the average daily traffic volume on national highways during the five-day May Day holiday this year is expected to exceed 60 million vehicles, with the peak traffic volume expected on May 1st, reaching 70 million vehicles, setting a new record for single-day traffic volume during the May Day holiday. National highways will be toll-free during the holiday.

The report also pointed out that the congested road sections during the holiday were mainly concentrated in provinces such as Jiangsu, Hubei, Hunan, and Anhui, as well as major highways such as Shenhai, Huyu, and Hurong. Travelers are advised to plan their routes in advance and travel during off-peak hours.

Separately, it was reported that passengers who accidentally miss their stop can inform the train conductor, and the railway authority will arrange the next available train to take them back free of charge.

Shanghai releases its first labor rules agreement for the express delivery industry: No fines may be imposed for complaints that have not been verified.

According to The Paper, Shanghai's first "2026 Shanghai Express Delivery Industry Labor Rules Agreement" was officially signed yesterday morning, covering nine leading platform companies in Shanghai, comprising ten chapters and 47 articles. The main contents cover:

  • Income Guarantee: For couriers who have worked in Shanghai for more than one month, the minimum monthly wage will not be less than 110% of the Shanghai standard; the direct delivery fee for each order will generally not be less than 25% of the due delivery fee.
  • Complaint penalties: Companies are prohibited from penalizing delivery personnel solely on the basis of unverified user complaints, and should shift toward positive incentives;
  • Labor protection: Establish a mechanism for suspending delivery and collection during extreme weather and exempting enterprises from penalties; enterprises must regularly organize safety training and provide protective equipment;
  • The right to rest: Guarantee delivery workers' right to regular leave through methods such as shift work, compensatory leave, and annual leave;
  • Anti-involution clause: Member units are prohibited from engaging in unfair competitive practices such as soliciting customers below cost or providing subsidies at a loss.

Starbucks and "Only Green" create a limited-time Dragon Boat Festival experience.

Yesterday, Starbucks China announced a cross-brand collaboration with the dance drama "Only Green", integrating the Eastern artistic conception of "A Panorama of Rivers and Mountains" into products, spaces, and immersive experiences through the three dimensions of color, form, and meaning.

For its core products, the Xingbing zongzi line received a complete 2026 revamp under the "Only Green" theme, launching four flavors: Rose Hawthorn, Taro Mochi, Iced Coconut, and Mung Bean Infused Rice Wine. The packaging also features, for the first time, a three-dimensional roll-up gift box that unfolds a classic scene from the "A Panorama of Rivers and Mountains" painting upon opening.

In terms of offline experience, Starbucks will upgrade its Reserve Hangzhou Hefang Street Intangible Cultural Heritage Concept Store into the "Only Green" themed highlight store, presenting the "Green Charm of a Thousand Miles of Rivers and Mountains Special Exhibition", which fully showcases the seven chapters of the dance drama: Unveiling the Scroll, Inquiring about Seal Script, Singing Silk, Searching for Stones, Practicing Brushwork, Tempering Ink, and Entering the Painting.

Worth watching

The comedy film "Three Hearts, Two Minds" is released today.

The comedy film "Three Hearts, Two Minds" was officially released today, coinciding with the May Day holiday.

The film tells the story of Jiang Ruilin, who discovers that her husband Luo Bin has a special relationship with Yu Xiaoyu. After she saves Yu Xiaoyu from a suicide attempt, the two join forces to carry out a series of plans against the "scumbag". The film presents its main storyline of women helping each other and fighting back in a comedic way.

The classic thriller "Jaws" is set to release on May 15th.

The classic thriller "Jaws" was officially confirmed yesterday for release on May 15. Directed by Steven Spielberg, the film holds a Douban rating of 7.8 and tells how the resort island of Amity is thrown into panic by a string of great white shark attacks, and how police chief Brody, fisherman Quint, and biologist Hooper team up for a life-or-death battle at sea.

The film premiered in the United States in June 1975, when Spielberg was only 28 years old. Upon release, Jaws broke the record for the highest-grossing film worldwide at the time, with a cumulative box office of $400 million. It also created the concept of the Hollywood "summer blockbuster" and is regarded by film historians as the origin of the summer movie season.

Mainland China import of "Masters of the Universe" officially announced

The epic fantasy film "Masters of the Universe" was officially announced yesterday for import into mainland China. It is scheduled for North American release on June 5, while its mainland release date is yet to be determined.

The film is adapted from the classic 1980s animated IP of the same name. It tells of Adam, a corporate drone living an ordinary life on Earth who, after finding the Sword of Power, returns to the planet Eternia, only to discover that his homeland has been overrun by the dark forces of Skeletor and lies in ruins.

To protect his home, he once again raises the Sword of Power and shouts the iconic line etched in a generation's memory, "Grant me the power! I am He-Man!", beginning an epic battle.


Morning Briefing | Apple iOS 27 May Bring a Major Photos App Upgrade / OnePlus and realme Reportedly Merge / China's Token Usage Reached 211 Trillion Last Year



The Musk v. Altman trial has officially begun: Musk argues that stealing from a charity is wrong, while OpenAI counters that Musk is simply sour grapes over not getting what he wanted.

DeepSeek is beta testing an "image recognition mode," and a new multimodal model may be about to be released.

Apple's iOS 27 is rumored to bring a major upgrade to the Photos app.

With a new round of financing on the way, Anthropic's valuation may exceed $900 billion.

Dreame responds in full on car-making: the "Huawei model", asset-light operation, self-developed chips, and a bet on the global high-net-worth population.

Last year, China's total token usage reached 211 trillion, with daily usage exceeding 100 trillion by year-end.

Reports indicate that OnePlus and realme have officially merged into a sub-business unit, with Li Bingzhong appointed as general manager.


Alphabet's Q1 net profit surged 81%, and Google Cloud surpassed the $20 billion mark for the first time.


Azure grows by 40%, AI surges by 123%, Microsoft's cloud business accelerates across the board.

Meta reported first-quarter revenue of $56.3 billion, with net profit surging 61% year-over-year.

Soda Music surpasses NetEase Cloud Music to become the third largest music platform in China

AI tokenizers exhibit "language bias": asking Claude a question in Hindi consumes more than three times the tokens of the same question in English.

The first undergraduate program in "Commercial Artificial Intelligence" in China has been approved, and the University of Science and Technology of China will begin enrolling students this fall.

Xiaomi 13 series supports battery upgrade

Sam Altman: Token-based pricing will eventually become obsolete; OpenAI aims to be an "intelligence factory."

Xiaomi's Xuanjie O3 chip has been revealed; a foldable may be the first phone to feature it.

Tencent IMA launches knowledge agent "copilot"

Ant's Bailing open-sources Ling-2.6-flash

WeChat stickers support posting original images with a resolution of 200 megapixels.

Big news

The Musk v. Altman trial has officially begun: Musk argues that stealing from a charity is wrong, while OpenAI counters that Musk is simply sour grapes over not getting what he wanted.

According to The New York Times and The Verge, the Musk v. OpenAI case officially entered the testimony phase on April 28 in the U.S. District Court for the Northern District of California in Oakland.

It is worth noting that just the day before the trial, Musk posted more than 20 messages on X, referring to Altman as "Scam Altman." Before the trial began, Judge Yvonne González Rogers summoned Musk to the judge's bench and warned him about his out-of-court behavior.

"How can we proceed with the trial when you keep making things worse outside of court?" Ultimately, both sides agreed to "restrain their statements" on social media.

Musk then appeared as the first witness, positioning himself as "the savior of humanity" and summarizing AI's future into two possible outcomes: "either a Star Trek-style utopia or a Terminator-style dystopia." He directly called Altman a "thief."

Stealing from a charity is wrong. If the defendant is acquitted, it will set a precedent for looting all charities in the United States.

However, The Verge's on-site reporter observed that Musk's testimony fell far short of expectations. He spent a significant amount of time recounting his personal entrepreneurial journey rather than focusing on the core allegations in the case, and even claimed to be the actual driving force behind OpenAI.

I came up with the idea, I named it, I recruited the core team, and I provided all the start-up capital. Other than that, I did nothing.

The statement was followed by a pause, awaiting laughter, but the audience's reaction was sparse. When asked to introduce former OpenAI board member Shivon Zilis, Musk vaguely replied, "She's my, uh, chief of staff, and, you know." (Zilis is the mother of several of Musk's children.) Laughter erupted from the audience, while the jury looked puzzled.

OpenAI's chief legal counsel, William Savitt, offered a completely different narrative in his opening statement: "We are here because Musk didn't get what he wanted. My client has the guts to succeed without him, and Musk doesn't like that."

Savitt presented the jury with internal emails from 2017 showing that Musk's aides had proactively discussed giving him a 55% stake in the for-profit divisions, and pointed out that Musk had never objected to the monetization of OpenAI before ChatGPT became a sensation. "That's sour grapes."

Musk's claim exceeds $150 billion. If he loses, Altman will consolidate his control over OpenAI, allowing the company to proceed with its IPO plan, valued at approximately $730 billion. The trial is expected to last four weeks, and Musk will continue to face cross-examination today.

Big companies

DeepSeek is beta testing an "image recognition mode," and a new multimodal model may be about to be released.

DeepSeek yesterday began testing an "Image Recognition Mode" alongside the existing "Quick Mode" and "Expert Mode". The new mode offers full multimodal image understanding rather than simple OCR text recognition.

 Related reading: DeepSeek just got a major update! Finally, it "opens its eyes" | Includes numerous real-world tests

Based on real-world testing, DeepSeek's image recognition mode demonstrates high overall accuracy, providing an answer in as little as half a second without activating the "thinking mode." It performs well in recognizing and understanding common scenarios such as movie stills, abstract images, and product photos.

What is even more noteworthy is its thought process: in addition to describing the content of the images, it will actively inquire about the identity of the publisher, the metaphors and subtexts of the images, and will correct itself multiple times during the reasoning process. Even before giving a conclusion, it will spontaneously list questions to verify the premise assumptions one by one, presenting a reasoning logic that is close to human reading habits.

However, the image recognition mode still has significant limitations. In the classic "finger counting" test, DeepSeek made a mistake on its first attempt, claiming it was "dizzy from counting," but was able to give the correct answer after user guidance or hints.

In addition, the image recognition process does not currently support online search and relies solely on the model's own knowledge base to answer questions. It cannot recognize newer things, such as Apple's mascot "Finder-chan" launched this year.

Just yesterday, Xiaokang Chen, a researcher on the DeepSeek multimodal team, posted on X: "Now, we see you." The post, accompanied by a comparison image of DeepSeek's whale mascot going from "blindfolded" to "open-eyed", was widely interpreted as a teaser for the upcoming launch of a new multimodal model.

Apple's iOS 27 is rumored to bring a major upgrade to the Photos app.

According to Bloomberg, Apple is planning a major upgrade to the built-in photo-editing capabilities of the iPhone, iPad, and Mac. Built on the Apple Intelligence platform, a brand-new suite of AI image-editing tools will arrive in iOS 27, iPadOS 27, and macOS 27, to be introduced in June this year.

According to reporter Mark Gurman, the new feature will add an "Apple Intelligence Tools" section to the Photos app's editing interface, which includes four tools: Extend, Enhance, Reframe, and Clean Up.

  • "Extend" allows users to generate additional image content beyond the original borders or automatically fill in the surrounding scene; users control the direction and extent of the expansion by dragging an edge of the image;
  • "Enhance" uses AI to automatically optimize color, lighting, and overall image quality;
  • "Reframe" is aimed primarily at spatial photos, letting users adjust the perspective after the shot;
  • "Clean Up", which already exists in iOS 26 and removes specified objects from images, will continue to be available in iOS 27.

 Related reading: iOS 27 pushes AI photo editing; Apple is also starting to feel AI anxiety.

With a new round of financing on the way, Anthropic's valuation may exceed $900 billion.

According to Bloomberg, Anthropic is considering a new round of funding with a potential valuation of over $900 billion. If successful, it would surpass OpenAI to become the world's most valuable AI startup.

Sources familiar with the matter revealed that investors have already made offers to Anthropic, proposing valuations more than double its current valuation. Discussions are currently in the very early stages, and the company has not yet accepted any offers.

As for existing shareholders, Google has committed to invest $10 billion in Anthropic at a valuation of $350 billion, and plans to invest up to $30 billion more after the company achieves certain performance targets; Amazon has also invested $5 billion at a valuation of $350 billion, and intends to inject another $20 billion over time.

Dreame responds in full on car-making: the "Huawei model", asset-light operation, self-developed chips, and a bet on the global high-net-worth population.

According to Jiemian News, Ma Junye, president of the Dreame Auto project, revealed in an interview with the media in San Francisco yesterday that the Dreame Auto team actually started preparations as early as 2021 to 2022, around the same time as Xiaomi Auto, and then went through a technical silence period of about three years.

The team currently has over 1,000 members, with R&D personnel accounting for approximately 70%, and is expected to expand to nearly 2,000 members in the second half of this year. CEO Yu Hao is deeply involved in the vehicle's ID styling and product definition, maintaining frequent communication with the team almost daily.

In terms of business model, Dreame adopts a "Huawei model": joint R&D with mature OEMs at home and abroad plus contract manufacturing, without building its own vehicle factory.

At the core technology level, Dreame insists on in-house development, covering a fully drive-by-wire intelligent chassis, drive motors, solid-state power batteries, and the smart cockpit, and for the first time confirmed independently planned and developed compute chips for the cockpit and intelligent driving.

In response to external doubts about its cash flow, Ma Junye said the company has "reserved enough funds to build cars" and has brought in some social capital and industry funds to share the risk.

Regarding product pricing, Dreame has made clear that it "will definitely not make cars priced under 200,000 yuan." Its mass-produced models will start at over one million yuan, with some versions exceeding two million yuan. A pure-electric coupe will launch next year, followed by an SUV in the same series. In terms of market strategy, Dreame is targeting roughly 400 million high-net-worth individuals globally, aiming to fill the overseas gap in high-end Chinese new-energy vehicles.

He explicitly denied the comparison to "the next LeEco," emphasizing that Dreame is engaged in genuine underlying technology research and development, and is committed to delivering physical products to the global market.

Last year, China's total token usage reached 211 trillion, with daily usage exceeding 100 trillion by year-end.

According to CCTV News, the "National Data Resources Survey Report (2025)" was officially released yesterday at the 9th Digital China Summit.

The report shows that the national daily token usage increased from over 1 trillion at the beginning of last year to 100 trillion at the end of the year, showing an exponential growth trend; the total token usage for the whole year reached approximately 211 trillion.
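The jump from over 1 trillion to 100 trillion daily tokens implies roughly 100x growth within a year. Assuming smooth compounding over 365 days (an assumption; the report gives only the two endpoints), the implied daily growth rate works out to about 1.3% per day:

```python
# Implied compound daily growth rate for ~100x growth over one year.
start, end, days = 1e12, 100e12, 365  # tokens/day at start and end of year

daily_rate = (end / start) ** (1 / days) - 1
print(f"implied compound growth: {daily_rate:.2%} per day")  # ~1.27% per day
```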

Reports indicate that OnePlus and realme have officially merged into a sub-business unit, with Li Bingzhong appointed as general manager.

According to Leifeng.com, OPPO issued an internal announcement last night, officially confirming the merger of the OnePlus and realme brands into a sub-series business unit.

  • OPPO Senior Vice President Li Bingzhong has been appointed as the General Manager of the Sub-Series Business Unit, responsible for the overall operation of the unit.
  • The marketing and service systems of OnePlus and realme will be integrated into the new business unit, with Xu Qi, the former president of realme's marketing and service, taking over as the head of the marketing and service of the sub-business unit.

At the product level, OPPO simultaneously established a sub-series product center, which includes a domestic product department and an overseas product department, under the unified leadership of Li Jie, reporting directly to Liu Zuohu. Wang Wei, former vice president of realme, was appointed as the deputy general manager of the sub-series product center, reporting to Li Jie.

At the R&D level, the original realme R&D team returned to the group as a whole, and the imaging, hardware and other departments were merged into OPPO, becoming subordinate units of OPPO's various hardware departments.

Alphabet's Q1 net profit surged 81%, and Google Cloud surpassed the $20 billion mark for the first time.

Today, Alphabet, Google's parent company, released its first-quarter earnings report, showing total revenue up 22% year-over-year to $109.9 billion, net profit up 81% year-over-year to $62.6 billion, and diluted earnings per share of $5.11, marking the 11th consecutive quarter of double-digit revenue growth.

  • Google's services revenue grew 16% to $89.6 billion, with Google Search and other revenue growing 19% to $60.4 billion, YouTube advertising revenue growing 11% to $9.9 billion, and subscription, platform and device revenue growing 19% to $12.4 billion.
  • Google Cloud revenue grew 63% year-over-year to $20 billion, breaking the $20 billion mark for the first time, while operating profit increased significantly from $2.2 billion in the same period last year to $6.6 billion;
  • Other business revenue (including Waymo driverless taxis and Wing drone delivery services) was $411 million, a slight decrease year-over-year, while operating loss widened to $2.1 billion;
  • The group's overall operating profit increased by 30% to US$39.7 billion, and the operating profit margin expanded by 2 percentage points to 36.1%.

Google Cloud became the company's primary growth engine for the first time, with revenue from products built on generative AI models growing by nearly 800% year-over-year, and order backlog nearly doubling to over $460 billion quarter-over-quarter; monthly active users of the enterprise version of Gemini increased by 40% quarter-over-quarter, and sales of partner ecosystem seats increased ninefold year-over-year.

Meanwhile, the company's first-party models processed more than 16 billion tokens per minute through direct API calls, an increase of about 60% compared to the previous quarter, and the total number of paid AI subscription users reached 350 million, marking the strongest quarter in its history.

Azure grows by 40%, AI surges by 123%, Microsoft's cloud business accelerates across the board.

Today, Microsoft released its results for the third quarter of fiscal year 2026, ending March 31, 2026:

Revenue reached $82.9 billion, up 18% year-over-year; GAAP net income was $31.8 billion, up 23% year-over-year; non-GAAP net income (excluding the impact of OpenAI investment) was $31.79 billion, up 20% year-over-year.

  • The Productivity and Business Processes segment generated $35 billion in revenue, a 17% year-over-year increase, with Microsoft 365 Business Cloud revenue growing by 19%, Consumer Cloud revenue by 33%, Dynamics 365 by 22%, and LinkedIn by 12%.
  • The Intelligent Cloud segment generated $34.7 billion in revenue, a 30% year-over-year increase, while Azure and other cloud services revenue grew by 40%.
  • The personal computing segment generated $13.2 billion in revenue, a 1% year-over-year decrease, while Xbox content and services revenue declined by 5%.
  • Overall gross profit was $56.1 billion, with a gross margin of approximately 67.6%, slightly lower than 68.7% in the same period last year, mainly due to the drag from rising cloud service infrastructure costs (service costs increased by approximately 28% year-on-year).

The most noteworthy growth engine is the accelerated synergy between AI and cloud computing. Microsoft's AI business annualized revenue exceeded $37 billion, a year-on-year increase of 123%; Microsoft Cloud's overall revenue reached $54.5 billion, a year-on-year increase of 29%.

Meta reported first-quarter revenue of $56.3 billion, with net profit surging 61% year-over-year.

Today, Meta released its first quarter earnings report for this year:

The company achieved revenue of $56.3 billion, a year-on-year increase of 33%; net profit of $26.77 billion, a significant year-on-year increase of 61%, and diluted earnings per share of $10.44.

  • Family of Apps revenue reached $55.91 billion, of which advertising revenue was $55.02 billion, a year-over-year increase of 33%; contributing $26.9 billion in operating profit.
  • Reality Labs reported revenue of $402 million, a slight decrease of 2.4% year-over-year; and an operating loss of $4.028 billion.
  • The overall operating profit margin remained at 41%, the same as the same period last year.

The most noteworthy growth engine this quarter was the increase in both volume and price of advertising: ad impressions increased by 19% year-on-year, and the average ad price increased by 12% year-on-year, both of which jointly drove the rapid growth of advertising revenue.

The company expects total revenue for the second quarter of this year to be between $58 billion and $61 billion, and its full-year total expenditure guidance remains unchanged at $162 billion to $169 billion, but its capital expenditure guidance has been raised to $125 billion to $145 billion, mainly reflecting higher hardware component pricing and data center expansion demand this year.

Soda Music surpasses NetEase Cloud Music to become the third largest music platform in China

Yesterday, analysis firm QuestMobile released its "2026 China Mobile Internet Spring Report," which mentioned a shift in the landscape of the online music app industry:

Kugou Music and QQ Music both have over 200 million monthly active users and continue to lead the market; Soda Music surpassed NetEase Cloud Music for the first time to enter the top three, pushing NetEase Cloud Music down to fourth place.

The report points out that Soda Music accelerated its breakout during the Spring Festival season, with downloads surging as it successfully penetrated the middle-aged and elderly demographic and lower-tier markets.

AI tokenizers exhibit "language bias": the same question to Claude in Hindi consumes more than three times the tokens of English.

Yesterday, AI researcher Aran Komatsuzaki released a comparative review of the tokenizers of mainstream large models, revealing that they exhibit "language bias":

Non-English users expressing the same content actually consume far more tokens than English users, which amounts to being quietly charged a "non-English tax."

He translated Rich Sutton's famous essay "The Bitter Lesson" into nine languages and fed them into the tokenizers of six different models. Using the token count of the original English text on OpenAI's tokenizer as the baseline, he measured each language's consumption multiple on each model.

The results showed that for the same Chinese text, Claude consumed 1.71 times the baseline, while OpenAI consumed only 1.15 times. The gap was even wider for Hindi on Claude, where token consumption reached 3.24 times the baseline; Arabic reached 2.86 times.

In the comparison of 6 models, Anthropic had the highest "non-English tax," followed by Kimi; Gemini and Qwen had the lowest non-English tax. Komatsuzaki bluntly stated, "Frankly, I didn't expect Claude to be this bad, and the difference is so huge. I believe enterprise clients will be very concerned about this kind of issue."

Komatsuzaki points out that tokenization efficiency depends on each language's share of the model's training data: English data is abundant, so English text is compressed efficiently; non-English data is scarce, so the same text can only be split into smaller pieces.

For users, higher token consumption directly raises API call costs, lengthens the wait before the model responds, and fills up the context window faster. His conclusion: whichever language has the largest market share gets the most cost-effective tokens.
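A toy byte-fallback tokenizer makes the mechanism concrete. Real BPE tokenizers learn merges from their training corpus; here, as a simplifying assumption, in-vocabulary English words cost one token each while everything else falls back to one token per UTF-8 byte, which is roughly what happens to scripts a vocabulary never learned to compress (the tiny vocabulary and the Hindi phrase are illustrative, not any vendor's actual tokenizer):

```python
# Toy model of tokenizer language bias: words in an English-heavy
# vocabulary cost one token; unknown words fall back to one token per
# UTF-8 byte, the worst case for scripts like Devanagari (3 bytes/char).
VOCAB = {"the", "bitter", "lesson"}

def count_tokens(text: str) -> int:
    total = 0
    for word in text.split():
        if word.lower() in VOCAB:
            total += 1                           # known word: one token
        else:
            total += len(word.encode("utf-8"))   # fallback: one token per byte
    return total

english = "The Bitter Lesson"
hindi = "कड़वा सबक"  # the same phrase in Hindi

ratio = count_tokens(hindi) / count_tokens(english)
print(count_tokens(english), count_tokens(hindi), round(ratio, 2))
```

The same content costs several times more tokens on the fallback path, mirroring the multiples Komatsuzaki measured for under-represented languages.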

The first undergraduate program in "Business Artificial Intelligence" in China has been approved; the University of Science and Technology of China will begin enrolling students this fall.

The University of Science and Technology of China (USTC) recently announced that the Ministry of Education has officially approved the establishment of an undergraduate program in "Business Artificial Intelligence" at its School of Science and Technology Business and School of Management. USTC has thus become the first university in China to offer this major, and plans to enroll its first batch of undergraduate students this fall semester.

Approved after nearly two years and multiple rounds of review and evaluation, the major is positioned as not purely technology-oriented, focusing on the integrated application of AI in business scenarios. Its knowledge system spans the fundamental theories of multiple disciplines, including AI, economics, and management, covering cutting-edge topics such as AI-based business model innovation, AI hardware architecture and the industrial ecosystem, business intelligent agents, AI-driven science and technology investment, and AI governance.

In terms of training objectives, students will systematically master the core theories of business administration, artificial intelligence, mathematical optimization and computer science, and hone eight core competencies, including business AI integration, intelligent data analysis, human-machine collaborative decision-making, and business system design.

Xiaomi 13 series supports battery upgrade

The latest page on the Xiaomi online store shows that Xiaomi has launched a "battery upgrade" service for two devices in the Xiaomi 13 series. The price is the same as the previous upgrade price for the Xiaomi 13 Ultra, which is 149 yuan for the battery + 40 yuan for labor.

  • Xiaomi Mi 13: 4500mAh → 4850mAh
  • Xiaomi 13 Pro: 4820mAh → 5361mAh

It's worth noting that after the replacement, you'll need to upgrade to OS 3.0.3XX. Earlier this month, Xiaomi launched a "Battery Upgrade" service for the Xiaomi 13 Ultra, supporting an upgrade to a 5500mAh battery capacity (the original is 5000mAh).

 Sam Altman: Token-based pricing will eventually become obsolete; OpenAI aims to be an "intelligence factory."

According to Stratechery, OpenAI CEO Sam Altman recently stated in an interview with tech commentator Ben Thompson that the token-based AI pricing model is unsustainable in the long run, and the industry will eventually shift to a pricing system based on "task completion".

Altman used the latest GPT-5.5 model to illustrate this judgment: a single GPT-5.5 token is priced higher than in the previous-generation GPT-5.4, but the number of tokens consumed to complete the same task is significantly lower. He believes users have never really cared about token consumption:

You don't actually care how many tokens the answer uses; you just want to get the job done. You only care about the total price and whether you can access the tokens whenever needed.

Building on this, Altman revised OpenAI's positioning from a "token factory" to an "intelligence factory." Its core goal is to deliver as much intelligence as possible at the lowest possible cost. Users don't need to care whether the underlying model is a large or small model, how many tokens are used, or whether it runs on a GPU or Amazon's self-developed Trainium chip.
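The per-task arithmetic behind Altman's GPT-5.5 example can be sketched with made-up numbers (mine, not any published price list): a model whose tokens cost more per unit can still be cheaper per finished task.

```python
# Hypothetical prices and token counts (illustrative only): a model with
# a higher per-token price wins on per-task cost if it needs far fewer
# tokens to finish the same job.

def task_cost(tokens: int, usd_per_million_tokens: float) -> float:
    """Total cost of one task at a given per-token rate."""
    return tokens / 1_000_000 * usd_per_million_tokens

older = task_cost(40_000, 2.00)   # cheaper tokens, but the task needs more of them
newer = task_cost(12_000, 5.00)   # pricier tokens, far fewer needed
print(f"older: ${older:.2f}, newer: ${newer:.2f}")  # older: $0.08, newer: $0.06
```

This is why, in Altman's framing, per-token prices stop being the number users should compare; only the total price of getting the job done matters.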

Altman also revealed that currently, far more OpenAI customers are requesting additional computing power than are negotiating lower prices. He drew a parallel between AI and traditional utilities like water and electricity, pointing out fundamental differences between the two:

If you consider intelligence as a "utility" (like water and electricity), I don't know of any other utility that makes me feel that—as long as the price is low enough, I'll keep using it and keep using more. There's no such utility.

AWS CEO Matt Garman added that while the unit price of computing power has decreased by several orders of magnitude over the past 30 years, the total amount of computing power sold today is greater than ever before, and the growth logic of AI demand is highly similar.

New products

Xiaomi's Xuanjie O3 chip has been revealed, and a foldable may be the first phone to carry it.

According to XimiTime, the Mi Code database recently revealed the specifications of Xiaomi's O3 SoC, which is expected to be the first to be featured in the upcoming Xiaomi MIX Fold 5 (internal codename "lhasa"). The estimated starting price is around $1,500 (approximately RMB 10,200).

Leaked data shows that the Xuanjie O3 has undergone a complete architectural redesign: the super-core frequency rises from 3.89 GHz on the O1 to 4.05 GHz, and it adopts a three-cluster layout of "super core (Prime) + performance core (Titanium) + little core (Little)," dropping the big-core cluster of the previous-generation O1.

  • The frequency of the small cores jumped significantly from 1.79 GHz to 3.02 GHz, an increase of about 68%, surpassing the previous generation of large cores at 1.89 GHz;
  • The frequency of the high-performance cores has been slightly increased from 3.39 GHz to 3.42 GHz, with only a limited change.
  • The GPU frequency has been increased from 1.2 GHz to approximately 1.5 GHz, an increase of about 25%.
  • The memory frequency specification remains unchanged at 9600 MT/s.

Starting at 109,800 yuan, the Geely Galaxy M7 Voyager is launched.

Yesterday, the Geely Galaxy M7 Voyager officially launched, offering four configuration versions with prices ranging from 109,800 yuan to 137,800 yuan. In terms of appearance, the new car continues Geely Galaxy's family design language, featuring a continuous LED light strip at the front and an intelligent air intake grille that automatically adjusts according to vehicle speed and cooling needs.

The vehicle measures 4770mm in length, 1905mm in width, and 1685mm in height, with a wheelbase of 2785mm and a trunk capacity of 700 liters. It is available in six colors: Blue, Green, Silver, White, Black, and Gray.

  • Equipped with Thor Hybrid 2.0 technology, the engine has a thermal efficiency of 47.26%, a system peak power of 175kW, and a comprehensive efficiency of 93.1%.
  • All models are equipped with a 29.8kWh battery, providing a pure electric range of 225km and a combined range of 1730km with a full tank of gas and a full charge.
  • The cockpit features the Galaxy Flyme Auto 2 system, a 15.4-inch 2.5K central control screen, a 7nm Longying-1 chip, a 25.6-inch HUD, and 50W air-cooled wireless charging;
  • The standard configuration includes the Qianli Haohan H3 solution, which supports high-speed NOA, automatic on/off ramps, and parking assistance in all scenarios.

 Related reading: Geely Galaxy M7 Voyager launched at 109,800 yuan, equipped with a 30 kWh battery across the entire series, achieving fuel consumption as low as 3.35L per 100km.

XGIMI launches four new flagship products, with the X50 Ultra Max achieving a native contrast ratio of 10000:1.

Last night, XGIMI held its new product launch event in China, introducing four flagship series: X50 Ultra, RS30, AURA3, and MIRA 4K.

X50 Ultra series:

  • Equipped with the self-developed "X-Vision Bionic Optical Engine", it integrates five core hardware components, including the DynaEye stepless bionic aperture, the X-Vision independent image quality chip, and the RGB pure laser light source;
  • The top-of-the-line X50 Ultra Max achieves a native contrast ratio of 10,000:1 and a dynamic black level contrast ratio of 100,000:1, breaking through the "ten thousand level" barrier for native contrast ratio of domestic DLP projectors for the first time;
  • Equipped with the X-Vision independent image quality chip jointly developed with Tsinghua Unigroup, it is the third dedicated image quality chip for projectors in the industry after Sony and JVC;
  • For a limited time, the X50 Ultra Max is priced at 17,999 yuan and the X50 Ultra at 13,999 yuan.

The RS30 series inherits the same technology as the X50 Ultra. The top-of-the-line RS30 Ultra Max features the same DynaEye stepless bionic aperture, a native contrast ratio of 7000:1, a brightness of 5500CVIA/6800ISO, and all models come standard with intelligent bidirectional tilt-shift, allowing projection onto a 100-inch screen from 3 meters away. It offers four configuration options from Pro to Ultra Max.

The AURA3 series laser TVs feature a 10,000:1 contrast ratio and ultra-short-throw wall projection, support rollable screens up to 150 inches, weigh less than 10kg, and cover 99% of the BT.2020 color gamut.

The MIRA 4K series upgrades the previous generation's 1080P to 4K and adds a "Light and Shadow Gallery" function, which includes more than 100 built-in art wallpapers. In standby mode, the wall can be used as a display canvas.

Tencent IMA launches knowledge agent "copilot"

Yesterday, Tencent IMA officially launched "copilot," a knowledge AI agent that lets users create their own agents. It supports five platforms: Mac, Windows, iOS, Android, and HarmonyOS. Access is currently by application, with availability rolled out gradually in the order applications are received.

One of copilot's core capabilities is its deeply personalized memory system, which consists of four modules: copilot settings, user profile, long-term memory, and experience/skills. The agent can remember the user's background, habits, and priorities, enabling continuous cross-scenario recall and self-iteration.

In terms of scene awareness, copilot supports hovering in the IMA application as a floating window, automatically sensing the webpage, file, or knowledge base content that the user is currently browsing, and can directly issue processing commands without having to upload additional files.

In terms of the skills ecosystem, copilot initially includes built-in official skills such as knowledge base operations, note-taking, and report generation. The knowledge base skill now supports reading the main text of a file and can summarize information across files.

Step Image Edit 2 released by Step Star: 3.5B parameters surpass 20B-level models.

Step Image Edit 2, a new generation image generation and editing model, was officially released yesterday by Step Star. It emphasizes lightweight design, high quality, and extremely fast response. With only 3.5B parameters, the company claims its actual performance surpasses that of large open-source image editing models ranging from 12B to 20B, generating an image in just 0.5-2 seconds.

On the publicly available academic benchmark KRIS-Bench, Step Image Edit 2 ranks first in the overall ranking of lightweight image editing models.

In terms of capabilities, the model supports image generation and editing, Chinese and English rendering, local editing, visual reasoning, subject consistency and style transfer, and can cover practical application scenarios such as IP creation, poster design, comic generation, portrait beautification, travel photo retouching, and portrait generation.

Motubrain, an embodied-intelligence "general brain," was released by BioScience.

Yesterday, BioScience Technology released Motubrain, a universal world-action model positioned as the "general brain" of embodied intelligent robots. It unifies perception, prediction, and execution in one model, allowing robots to truly understand and act on the physical world.

  • Multi-task generalization: The more tasks a model undertakes, the smarter it becomes, no longer limited to training in a single scenario;
  • Multi-robot adaptation: One model can be adapted to robots of different shapes, breaking the traditional practice of "one robot, one model";
  • Long-range task execution: It can complete complex tasks with more than 10 consecutive actions in one go, instead of just running 2-3 steps at the demo level;
  • Dynamic predictive decision-making: It can anticipate environmental changes and adjust in real time during execution.

In authoritative evaluations, Motubrain topped both the WorldArena (world-model understanding) and RoboTwin 2.0 (robot execution) international leaderboards. On the latter it achieved an average score of 96.0 across 50 complex tasks, the only model to average above 95.

Tencent's Hunyuan open-source Hy-MT1.5 edge translation model supports 33 languages.

Yesterday, Tencent Hunyuan officially open-sourced Hy-MT1.5, an offline translation model for mobile devices. Compressed, it is only 440MB; it supports 33 languages plus 5 dialects and minority languages, and works without an internet connection.

Official data shows that the 1.8B parameter model outperforms larger-scale open-source models such as Tower-Plus-72B and Qwen3-32B, as well as mainstream commercial translation APIs such as Microsoft Translator and Doubao Translation, in the Flores-200 Chinese-foreign translation benchmark test.

 Hugging Face: huggingface.co/tencent/Hy-MT1.5-1.8B-1.25bit

Galaxy General open-sources robot large model LDA: it works even with "garbage data" and gets stronger with practice.

Yesterday, Galaxy General Robotics released LDA, a cross-embodiment "Latent World-Action Foundation Model," simultaneously open-sourcing its core algorithm and code in full. The accompanying paper has been accepted to RSS, a top conference in the robotics field.

The core breakthrough of LDA lies in its first unified, effective use of all types of embodied data: virtual-real hybrid, human-robot hybrid, action-labeled and unlabeled, and data of varying quality. Experiments show that as the data scale expands from thousands of hours to tens of thousands of hours, model performance keeps improving steadily; even introducing a large amount of failure data does not hurt performance but improves it.

  • Model architecture: LDA implements the WAM (World-Action Model) framework, unifying policy generation, forward dynamics prediction, inverse dynamics inference, and visual prediction within the same representation space, forming a complete "perception-decision-feedback" closed loop;
  • Visual representation: replacing the traditional VAE with a DINO structured latent space effectively filters out appearance interference such as lighting and texture, enabling cross-embodiment dynamics learning. Comparative data shows that UWM's success rate almost stagnates when scaling from 0.1B to 1B parameters, while LDA keeps improving over the same range;
  • Action space: a unified hand-centric action space maps all body actions to the end effector's wrist-pose changes and hand contact patterns, fully decoupling operation semantics from the specific mechanical structure, so that operations such as gripping, rotating, and inserting can share dynamics across embodiments.

Ant open-sources Ling-2.6-flash

Ant Group officially open-sourced the Ling-2.6-flash weights yesterday. The model has 104B total parameters, with only 7.4B activated per inference, and a 256K context window.

Ling-2.6-flash introduces a hybrid linear attention mechanism on the Ling 2.0 architecture, upgrading the original GQA attention to a 1:7 MLA + Lightning Linear hybrid architecture, and combining it with a highly sparse MoE design, resulting in inference efficiency that is significantly better than other models of the same level.

In a 4-card H20 environment, generation speed reaches up to 340 tokens/s, and peak prefill and decode throughput is about 4 times that of comparable open-source models.

In AI agent-related evaluations, Ling-2.6-flash performed outstandingly, with multiple indicators such as BFCL-V4, TAU2-bench, SWE-bench Verified (61.2%), Claw-Eval, and PinchBench reaching or approaching the state-of-the-art (SOTA) level for the same parameter level.

 Hugging Face: huggingface.co/inclusionAI/Ling-2.6-flash

New consumption

WeChat Moments now supports posting original images at up to 200 megapixels.

Yesterday, WeChat officially announced that Moments now supports sending and viewing original images, and it has partnered with OPPO to support sharing 200-megapixel ultra-high-definition images. Currently, the feature is only available on Android; users can try it by upgrading to version 8.0.71 or above.

JD Art Museum "Unboxing Project" Launched

Yesterday, JD Museum officially announced the launch of its public art project, "Unboxing JD Museum," which will tour Beijing, Suqian, and Shenzhen starting in May.

The project uses JD Express cardboard boxes as a medium, with the core carrier being the "Cardboard Pavilion," a mobile exhibition hall jointly designed by Shenzhen Daxing Jizi and Beijing Small Production. The pavilion will present multimedia content including videos, installations, and sounds.

During its first stop in Beijing, the project launched an online creative challenge with two tracks: handicrafts and AI. It also partnered with JD.com's "Starlight Relay" charity program to exhibit paintings by children from rural areas. In addition, the project conducted questionnaires and interviews with over 100 artists and scholars worldwide, and the results will be preserved in digital archives and publications.

Beautiful

The Devil Wears Prada 2 opens today.

"The Devil Wears Prada 2" officially premiered today and is now showing in theaters nationwide.

The film focuses on the impact of the digital age on traditional fashion media—the once-authoritative magazine "Runway" faces a survival crisis, and "fashion mogul" Miranda (Meryl Streep) and her former assistant Andy (Anne Hathaway) team up again.

DC's "Supergirl" Confirmed for Import

According to Douban Movie, DC Pictures' new superhero film "Supergirl" has been confirmed for release in mainland China, with the release date to be determined.

The story is adapted from the highly acclaimed DC comic "Supergirl: Woman of Tomorrow," and tells of Kara who, to save her beloved dog Krypto, teams up with a companion she meets by chance; the two embark on a race against time.

#Welcome to follow iFanr's official WeChat account: iFanr (WeChat ID: ifanr), where more exciting content will be presented to you as soon as possible.

I took the SBTI test, but I’m not an SBskill user.

April is not even halfway over, and your WeChat Moments have probably already been flooded with posts three or four times, right?

The lobster craze hasn't died down yet, and another new nationwide frenzy has emerged online: SBTI —

This is a small tool created by Bilibili uploader @蛆肉儿串儿 as a parody of the MBTI (Myers-Briggs Type Indicator) personality test. It abandons serious Jungian psychological classification and adds many localized elements:

After @蛆肉儿串儿 released the video and SBTI test on April 9th, our iFanr editorial team's WeChat Moments were quickly flooded with posts praising leaders, fakes, beauties, and so on.

SBTI original video link: https://www.bilibili.com/video/BV1LpDHByET6

The editor at iFanr also took a test and, as expected, discovered that they are an alcoholic:

According to the uploader, the SBTI was originally designed to persuade a friend to quit drinking. The questions in it have no clear psychological basis, but as long as you choose positive answers to the leading questions about drinking, the personality test results will definitely show that you are an alcoholic.

Image | X @VikingSkirts

After all, in SBTI, the same person can get three completely different "personality" results by taking the test three times. Its whole purpose is just to make you laugh.

Then take a screenshot and post it on WeChat Moments, expanding the idea from making yourself laugh to making everyone laugh.

But in the same week that SBTI dominated headlines, another topic was quietly seeping into everyone's daily lives—

That is "your colleague.skill".

Note: This is an AI-generated image. Companies that truly develop employee skills don't waste money on employee badges.

In the first half of this week, you may have been bombarded with all sorts of "skills": Trump's ability to draw his own candlestick charts, an ex who remembers every chat log, a boss who uses PUA tactics that are even more ruthless than the real person, and so on.

Not to mention the astonishing Zhang Xuefeng.skill that surfaced a couple of days ago…

Strictly speaking, Skill is equivalent to the "preset" fed into a large language model.

Its principle is similar to writing a character prompt like "You are a sweet and soft little cake" in the dialog box, only more detailed, richer, and more standardized than writing it by hand.
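The "preset" idea can be sketched in a few lines: a skill here is just a folder whose instruction file gets prepended to the conversation as a system prompt. The SKILL.md file name and the message format are my assumptions for illustration, not any vendor's actual spec:

```python
# Minimal sketch of a "skill": a folder whose instruction file becomes
# the system prompt of a chat request. Illustrative only.
import tempfile
from pathlib import Path

def load_skill(skill_dir: str) -> str:
    """Read a skill folder's instruction file to use as a system prompt."""
    return Path(skill_dir, "SKILL.md").read_text(encoding="utf-8")

def build_messages(skill_prompt: str, user_input: str) -> list:
    """Chat-completion style message list with the skill as the system role."""
    return [
        {"role": "system", "content": skill_prompt},
        {"role": "user", "content": user_input},
    ]

# Demo: create a throwaway skill folder and assemble one request.
with tempfile.TemporaryDirectory() as d:
    Path(d, "SKILL.md").write_text(
        "You are a meticulous report writer. Always cite data sources.",
        encoding="utf-8",
    )
    messages = build_messages(load_skill(d), "Summarize this quarter's numbers.")

print(messages[0]["role"])  # system
```

A "colleague.skill" is the same mechanism with the instruction file distilled from a person's chat logs and documents instead of written by hand.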

Image | X @tuzi_lumaomao

Meanwhile, the process of training (or distilling) this skill can be quite simple.

Feed a former colleague's Lark messages, DingTalk documents, and work emails into the distillation tool, and you get an AI clone that mimics their work habits, speaking style, and even the way they shift blame.

Your colleague left, but his skill remained to continue the manual labor.

Accepting labeling, opposing labeling

However, jokes aside, memes aside, the popularity of this "personal skill" model and the SBTI trend that started yesterday are essentially the same phenomenon.

A form of labeling people.

After all, whether it's SBTI, MBTI, simple i/e person classification, or even traditional astrological energy and zodiac fortune, they are all essentially "labeling".

We like to actively categorize our behavioral habits through this act of "labeling ourselves" and use this as a basis to find smaller communities.

This labeling represents my implicit self-identification or expectations, as well as a topic of conversation in social situations.

At the same time, Skill is also a form of labeling.

In late 2025, Anthropic released Claude Skills, and in early 2026, OpenClaw ignited the intelligent agent craze. Skill, as the "skill store" for intelligent agents, began to expand rapidly. The principle is to package a specific professional ability into a folder of reusable modules.

However, in the past we only talked about "skills for creating web pages" or "skills for verifying photo hash values," but the recent emergence of "colleague.skill" marks a clear shift:

People are starting to worry that the definition of Skill is shifting from "what a model can do" to "whose abilities can be packaged."

Since they're all packaged and labeled, why can we accept MBTI and like SBTI, but feel fear and anxiety about our colleague's skill?

I took the SBTI test myself; it's a label I voluntarily adopted, and this act itself carries a secret sense of pleasure.

The test result showed I was an "alcoholic," and I posted it on my WeChat Moments with a laugh. It's a form of self-expression, essentially no different from posting an emo status on Moments calling yourself a "paratrooper" (internet slang for an idiot).

This kind of "self-defined" label is light, because I can change it or not accept it. Today I'm an "alcoholic," tomorrow I might become a "boss," and no one will reassess my worth as a person because of it.

But if the company distills me into a Skill, then the nature of the situation becomes completely different.

"I.skill" is how others exploit me: they refine my accumulated work experience, problem-solving intuition, and tacit understanding with colleagues into a set of parameters, pack them into a file of a few hundred KB, and price it below the local minimum wage with a note attached: "reusable."

▲Image | From the Abyss

I'm an idiot, not idiot.skill

It is undeniable that Skill, as a technological tool, carries no bias in itself.

The root of all problems lies in the fact that our use of AI has been forced, alienated, and distorted from "humans using tools" to "humans becoming tools."

After all, the logic of distillation is very simple: take a non-standard asset (the employee), standardize it (distill it into a Skill), and turn the irreplaceable into the replaceable.

In this process, what I lose is not just a social label to mock myself with, but my right to exist as a professional.

Furthermore, what's even more unsettling than being "refined" is how this path continues to move forward.

The cold, hard law of capital has proven that the nature of exploitation will not change; the only progress capital makes is in the methods and degree of its exploitation.

The Skill system and the entire AI technology field are currently in the process of "transforming from a technological tool into a tool of exploitation".

When your skill profile becomes your digital avatar within the company, HR will start using "the reusability of this skill" to assess your indispensability. Your label changes from an external description into your very existence:

You are no longer "someone who can create beautiful, concise financial statements," but merely "the named contributor of the statement-making skill."

This sounds like science fiction, but the typical cyberpunk worldview—where a person's market value is determined by organs and implants—is much closer to the possible future worldview of Skill than we'd like to admit.

Because replacing people with skills is not a technological iteration like "cars replacing horse-drawn carriages," but rather a denial of the value of "people as people" themselves.

In workshops and handicrafts, workers use tools; in factories, workers serve machines.

In the former case, the movement of the means of production begins with the worker; in the latter case, the worker follows the movement of the means of production.

…Even the lightening of labor becomes a means of torment, because machines do not free workers from labor; they strip the labor of its meaning.

Our concerns about Skill, in a narrow sense, stem from the fear that capital will use it as a tool to ruthlessly and without restraint reduce labor costs; in a broader sense, it undermines the "people-centered" concept in modern political theory.

Therefore, people are happy to use SBTI to mock themselves, slapping on a "malou" (corporate monkey) label so they can keep earning their bananas.

But everyone also refuses to be distilled, unknowingly or by force, into a skill, reduced to a tool that no longer counts as human.

To put it bluntly: SBTI is a game I play myself, but Skill turns me into someone else's prey.

This may well be the collective sentiment of our time.

After FOMO (Fear of Missing Out), we are now entering a new anxiety driven by LLMs, agents, and lobsters: FOBO (Fear of Becoming Obsolete).

FOBO drives us to participate frantically, to spam social media, and to relentlessly test what kind of personality we are; yet, FOBO also makes us suddenly feel alarmed in the dead of night:

Can my experience, skills, judgment, and even the tone of my voice be compressed into a Markdown file and then infinitely copied at zero cost?

This SBTI-versus-FOBO split in modern life reflects the same psychological need from two directions:

On this planet with billions of people, I need to confirm that I am unique, irreplaceable, and cannot be reduced to a string of code.

I can call myself an idiot, but I can't accept being distilled into idiot.skill.

#Welcome to follow iFanr's official WeChat account: iFanr (WeChat ID: ifanr), where more exciting content will be presented to you as soon as possible.

Xiaomi’s latest humanoid robot’s hands can “sweat”.

The most surprising new product from Xiaomi recently isn't a car or a phone, but a humanoid robot that hasn't been officially released yet: the Xiaomi CyberOne V2.

It made its first public appearance at Xiaomi's investor conference the day before yesterday.

It didn't run, jump, or do a backflip; it simply stood there quietly, like a well-trained staff member, handing out gifts to guests, shaking hands, and giving high-fives.

Xiaomi has not yet released official specifications, but according to online leaks, the Xiaomi CyberOne V2 humanoid robot is 178cm tall and weighs approximately 52kg.

Other parameters include the robot's walking speed, which is approximately 0.98 m/s, and its single-arm lifting capacity, which can support a weight of 3 kg. In comparison, the previously released Unitree H2 robot has a maximum walking speed of 3.3 m/s, a maximum arm load of 15 kg, and a rated load of 7 kg.

The focus of Xiaomi CyberOne V2 is clearly not on walking and weightlifting. The most noteworthy feature this time is the redesigned hand of the Xiaomi robot.

These hands are built at a 1:1 scale of an adult male hand, with 22-27 degrees of freedom. They can handle delicate industrial tasks such as rapidly tightening screws and rotating studs in the palm, yet are gentle enough to pinch a feather or touch a balloon.

Even more surprisingly, these hands also have "sweat glands," like human skin.

Other leaks also mention that the Xiaomi CyberOne V2 relies on its underlying emotional AI model to recognize facial expressions and voices, thereby providing appropriate interactive feedback.

However, some American netizens commented that the Xiaomi CyberOne V2 looks too much like the Tesla Optimus, and Musk was right not to reveal any information about the Optimus in advance.

Musk had previously stated that the delay in showcasing the Optimus V3 was to prevent competitors from copying it, and that it should be kept hidden as much as possible before mass production.

Dexterous hands are the hardware bottleneck for robots.

From both a technological and a capital-markets perspective, robotics has been moving very fast lately, with embodied-intelligence startups landing funding almost daily.

On the locomotion side, robots' footwork has become impressive: the robot half-marathon record has broken the one-hour mark, surpassing the human record.

However, when it comes to "hand operation," tasks like turning pages or tying shoelaces, which are everyday operations for humans, are still a pipe dream for robots.

The core of embodied intelligence lies in how a robot's brain interacts with the real world through its physical body, and dexterous hands have become the biggest hardware bottleneck in achieving perfect interaction.

Several robotics companies have conducted research on the problem of dexterous hands. BrainCo previously released the BrainCo Revo 3 intelligent dexterous hand, which has 21 degrees of freedom, integrates full-palm haptic and fingertip visual-haptic, and is compatible with the open-source ecosystem.

In the official demonstration video, this hand exceeds the range of human hand movement and covers 33 different grasping gestures. It can solve Rubik's Cubes with both hands, use scissors, and handle prayer beads, among other things.

Dexterous hands are hard because software and hardware hit dead ends at the same time. On the software side, human hand movements must be retargeted onto the robotic hand; on the hardware side, it is difficult for the tiny actuators inside the fingers to be powerful, sensitive, and reliable all at once.

"Retargeting" here means converting the posture, fingertip trajectory, and contact relationships of a human hand into joint angles and control commands that a robotic hand can execute.

However, human and robotic hands differ in size, joint count, and range of motion. Actions that are natural for a human may, when mapped directly onto a robotic hand, become unreachable, clip through the object, or put the contact points in the wrong place.
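To make those failure modes concrete, here is a deliberately minimal, hypothetical retargeting sketch (one joint, one link; all limits and lengths invented). Real pipelines optimize over full fingertip trajectories and contacts; this only shows how a human pose beyond the robot's joint range gets clamped, leaving the fingertip on the wrong side of the knuckle:

```python
import math

# Hypothetical joint ranges: a human PIP joint curls to ~100 degrees,
# the (assumed) robot joint only to 80 degrees.
HUMAN_LIMITS = (0.0, math.radians(100))
ROBOT_LIMITS = (0.0, math.radians(80))

def retarget_angle(theta_human: float) -> float:
    """Linearly rescale a human joint angle into the robot's range, then clamp."""
    h_lo, h_hi = HUMAN_LIMITS
    r_lo, r_hi = ROBOT_LIMITS
    t = (theta_human - h_lo) / (h_hi - h_lo)   # normalize to [0, 1]
    theta_robot = r_lo + t * (r_hi - r_lo)     # rescale into robot range
    return min(max(theta_robot, r_lo), r_hi)   # out-of-range poses saturate

def fingertip_x(theta: float, link: float) -> float:
    """1-DoF toy forward kinematics: fingertip x-offset of a single link."""
    return link * math.cos(theta)

# A fully curled human finger (100 deg) maps to the robot's 80 deg limit:
theta_r = retarget_angle(math.radians(100))
print(round(math.degrees(theta_r), 1))         # -> 80.0

# The fingertip lands on the opposite side of the joint axis -- a wrong
# contact point, even though each joint did "its best":
print(fingertip_x(math.radians(100), 0.04) < 0 < fingertip_x(theta_r, 0.04))
```

The linear rescale-and-clamp is the crudest possible mapping; it already illustrates why naive mapping breaks contacts, which is exactly what optimization-based retargeting exists to fix.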

In terms of hardware, leg joints typically have more space, allowing for motors with larger radii and higher torque densities, making it easier to adopt low reduction ratios or quasi-direct drive solutions. For example, a 6:1 reduction ratio means that for every 6 revolutions of the motor, the output shaft rotates 1 revolution; the speed decreases, but the output torque increases.

▲Leg motor (gear ratio: 6) and finger (gear ratio: 288). Torque scales with r³.

Fingers don't have that kind of space. The motor must be shrunk to a size that can fit inside a finger joint, and under geometrically similar conditions, the motor torque roughly decreases with the cube of the characteristic length. If the linear size is reduced to 1/10, the torque may only be on the order of 1/1000 of the original.

When torque is insufficient, a common approach is to compensate with a higher reduction ratio, such as 100:1, 200:1, or even 288:1.

The cost of a high reduction ratio is also direct: friction, backlash, efficiency loss, and reflected inertia all become more difficult to manage. A finger that is very nimble in simulation may become stiff and blunt in reality, lacking smoothness upon contact, making fine manipulation difficult.
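The two scaling laws in the last few paragraphs can be put into numbers. A back-of-envelope sketch, with all torque and inertia values invented for illustration (only the 6:1 and 288:1 ratios come from the caption above):

```python
# Torque falls with the cube of motor size; an N:1 gearbox multiplies torque
# by N but multiplies reflected rotor inertia by N^2 -- the hidden cost that
# makes highly geared fingers feel stiff on contact.

def scaled_torque(tau_full: float, scale: float) -> float:
    """Geometric similarity: torque ~ characteristic length cubed."""
    return tau_full * scale ** 3

def output_torque(tau_motor: float, ratio: float, efficiency: float = 1.0) -> float:
    """An N:1 gearbox multiplies torque by N (minus losses)."""
    return tau_motor * ratio * efficiency

def reflected_inertia(rotor_inertia: float, ratio: float) -> float:
    """Seen from the output shaft, rotor inertia appears multiplied by N^2."""
    return rotor_inertia * ratio ** 2

tau_leg = 10.0                                 # hypothetical leg-motor torque, N*m
tau_finger = scaled_torque(tau_leg, 1 / 10)    # shrink linear size to 1/10
print(tau_finger)                              # ~0.01 N*m, 1/1000 of the original

# Recover usable torque with the 288:1 ratio from the caption:
print(output_torque(tau_finger, 288))          # ~2.88 N*m at the joint

# ...but reflected inertia explodes relative to the 6:1 leg drive:
print(reflected_inertia(1.0, 288) / reflected_inertia(1.0, 6))  # -> 2304.0
```

This is why a 288:1 finger can match a leg motor's torque on paper while still feeling "stiff and blunt" on contact: the controller is fighting thousands of times more apparent inertia, plus the friction and backlash of the gear train.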

According to an earlier post from Xiaomi Technology on the full-palm haptic bionic hand, Xiaomi rebuilt the CyberOne V2's bionic hand around the goal of fully reusing human data:

1:1 Ultimate Biomimicry: The bionic hand's volume has been significantly reduced by 60%, with dimensions identical to an adult male hand. Simultaneously, it boasts 64% more degrees of freedom, possessing 22-27 degrees of freedom (DoF), with reachability and inertia distribution nearly matching those of a real human hand.

Full-palm tactile coverage: if a robot's vision is blocked, it essentially cannot work. Xiaomi introduced a tactile-glove solution, raising the coverage of the tactile sensors across the entire palm to 8,200 square millimeters. A human can wear the glove to record demonstrations, and the robot perfectly inherits the "feel" of the touch.

150,000-cycle durability: squeezing a cup is easy in a lab or a demo video, but in a factory, after 10,000 consecutive screw-driving cycles, a robot hand's tendons, springs, and sleeves start to break. Xiaomi's bionic hand has already exceeded 150,000 cycles in real grasping applications.

The most unique detail is the "sweat glands" of the dexterous hand.

To achieve this level of dexterity, Xiaomi also had to cram a bank of motors into the robot's single forearm.

In practice, a single motor can draw over 100 W, of which some 30 W becomes waste heat, enough to burn out the wiring. With no room for a large external fan in the confined space, the team found inspiration in how humans sweat to dissipate heat.

Xiaomi used metal 3D printing to create a miniature liquid-cooled circulation channel within a compact forearm structure. A micro-pump transfers heat, and then cooling is achieved through water evaporation.

In actual testing, this biomimetic sweat gland system only needs to evaporate 0.5mL of water per minute to provide approximately 10W of active heat dissipation.
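As a sanity check on that figure: taking the textbook latent heat of vaporization of water (~2,260 J/g, an assumed round value; it varies a few percent with temperature), fully evaporating 0.5 mL/min sets a theoretical ceiling somewhat above the quoted number:

```python
# Back-of-envelope check of the "sweat gland" cooling figure.
LATENT_HEAT_J_PER_G = 2260.0        # assumed latent heat of vaporization
flow_g_per_s = 0.5 / 60.0           # 0.5 mL/min of water is ~0.5 g/min

upper_bound_w = flow_g_per_s * LATENT_HEAT_J_PER_G
print(round(upper_bound_w, 1))      # -> 18.8 (watts, full-evaporation ceiling)

# The quoted ~10 W is therefore physically plausible: it corresponds to
# roughly half the circulated water actually evaporating, with the rest
# recirculating as liquid or lost to pump and channel inefficiencies.
print(round(10.0 / upper_bound_w, 2))   # -> 0.53 effective fraction
```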

Besides the hands, there is also the robot's brain.

Hardware is iterating, and models are advancing in parallel.

Two months ago, Xiaomi open-sourced Xiaomi-Robotics-0, a VLA (Vision-Language-Action) model for embodied intelligence.

In an official post, Xiaomi Technology further open-sourced the complete post-training pipeline on a real robot.

The most intuitive datapoint: starting from the pre-trained base and fine-tuning on 20 hours of real-robot task data, Xiaomi-Robotics-0 can learn the difficult task of "putting the earphones into the earphone case," and can stow several earphones in succession.

One noteworthy technical detail in this post-training process is the solution to the "laziness effect".

To ensure smooth robot movements, the industry typically employs asynchronous reasoning and "action prefixing" techniques, allowing new actions to transition naturally from the inertia of the previous action. However, this can cause AI to become "lazy": over-relying on motion inertia and selectively ignoring real-time visual feedback from cameras.

Xiaomi counters this with three mechanisms: an adaptive weighted loss, a Λ-shaped attention mask, and random occlusion of prefix actions. Simply put, training deliberately presents the model with "incomplete answer" situations, forcing it to attend to the current visual signal.
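Of the three, prefix random occlusion is the easiest to picture. A toy sketch of the idea (not Xiaomi's code; the names, placeholder value, and 0.5 probability are all invented): some fraction of the time, the action prefix carried over from the previous inference step is blanked out entirely, so the policy cannot coast on motion inertia and must ground its prediction in the current camera frame.

```python
import random

MASK = None  # placeholder the model learns to treat as "no prefix available"

def occlude_prefix(prefix_actions, p_occlude: float, rng: random.Random):
    """With probability p_occlude, replace every prefix action with MASK."""
    if rng.random() < p_occlude:
        return [MASK] * len(prefix_actions)
    return list(prefix_actions)

rng = random.Random(0)
prefix = [0.1, 0.2, 0.3]            # previous step's tail actions (toy values)

# Over a training "epoch", roughly half the samples lose their prefix and the
# model is forced to predict from the visual observation alone:
batch = [occlude_prefix(prefix, 0.5, rng) for _ in range(1000)]
masked = sum(b[0] is MASK for b in batch)
print(masked)                        # roughly 500 of 1000 samples occluded
```

The same trick appears in many sequence models under names like input dropout or scheduled sampling; the point here is specifically to break the shortcut that asynchronous action-prefixing creates.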

This combination of hardware and software has already put Xiaomi robots to work in car factories. At a self-tapping-nut installation station, they have run for 3 hours continuously with an installation success rate of up to 90.2%, keeping up with the line's high-speed 76-second production cycle.

Robots begin mass delivery

Tesla previously shut down the entire Model S/X production line to make room for robots.

During the Q1 earnings call, Musk announced that the third-generation Optimus V3 is expected to be unveiled in the middle of the year, with production starting in late July to August at the Fremont, California factory, and deliveries to enterprise customers in the second half of 2026, with a planned annual production capacity of 1 million units.

But as Musk admitted in a podcast, fine motor skills were "the most difficult part of the whole project."

Tesla's Optimus is not yet in mass production, while another American humanoid-robot company, Figure, announced today on X that it has scaled production 24-fold, from one robot per day to one robot per hour.

In their official press release, Figure mentioned that they have delivered more than 350 robots.

For Xiaomi, getting a consumer-grade, general-purpose humanoid robot to market may not happen as quickly as it has for Figure, Unitree, or even Tesla.

However, the direction of CyberOne V2 makes clear that beyond making the robot run faster and lift heavier, what Xiaomi really wants to solve is giving it hands that can actually do work.

▲Video from the official website of the robotics company in which Xiaomi led an investment round.

After all, whether humanoid robots can enter factories and homes has never been determined by whether they can do somersaults, but by whether they can tighten screws, collect earphones, hand things over, and perform those seemingly simple but most everyday actions.

And that is precisely the step that brings humanoid robots closest to large-scale deployment.

Some images are sourced from Xiaomi Technology's official WeChat account, X@niccruzpatane, and https://www.origami-robotics.com/blog/dexterity-deadlocks.html
