
Kimi K2.5 has brought a "swarm moment".

The K2.5 update has generated a lot of discussion both domestically and internationally in the past two days. It features a native multimodal model that provides state-of-the-art coding and vision capabilities, as well as an autonomous agent swarm paradigm—summoning a group of agents to complete tasks. It sounds incredibly cool.

Multiple agents with various skills, so cool and fun!

K2.5 is now fully released and available for immediate use on client devices. The K2.5 Agent offers a free trial, while the K2.5 Cluster is a paid feature, currently only available on the Allegretto plan. Subscriptions also have a points limit: starting at 47 points per month, with each task consuming 3 points.

Overall, it's sufficient. If you're unsure, you can participate in today's giveaway and try it out first.

However, as a long-time Kimi user, of course I had to buy it. I happened to have a bunch of files that needed merging, and I was too lazy to copy and paste them manually, so I sent them to Kimi and enabled cluster mode to handle them all at once.

In cluster mode, Kimi adds a design touch here: a name tag drops down, letting you see which "person in charge" is handling each task.

The final merged document turned out quite well. I then asked it to organize and adjust the subheadings at each level, which produced a tidy workflow of analysis, proposal, and execution. However, it's best to download the documents locally to check the formatting, as Kimi's built-in preview sometimes doesn't accurately reflect the changes made in each round.

To further examine its concurrent operation, I referred to the official demo and tested a task: retrieve all literature on agent swarms from the past three months, compile it into an Excel spreadsheet, and extract the core findings and research innovations.

This time, more "personnel" were deployed, with various agents rushing in to support, and each had its own assigned subtask.

This took significantly longer than before, but that's okay; I can leave it running in the background for now. Meanwhile, I also assigned a task to test its multimodal capabilities.

This is the original image uploaded to Kimi; the video version has more animations. Kimi's task was to convert this design into a webpage while preserving all design elements and styling. The prompt is simple, but the actual work is complex: it requires recognizing and understanding the image, generating the source images, and writing the front-end code.

This task took a considerable amount of time, but the final result was excellent. There were a few minor issues, such as image layout, hover effects, and navigation problems. However, the core design elements were retained, and the website functionality was complete.

Looking back, the literature search task is also done, and a neat Excel spreadsheet has been generated:

The final test task was to find influencers on Xiaohongshu (Little Red Book), specifically tech bloggers with over 5,000 followers and more than 100 posts. These two conditions are actually quite lenient, making the search very broad.

Kimi's first problem was that it couldn't access Xiaohongshu. This could have been handled by proactively asking the user, similar to the approach used by GPTagent.

But it didn't do that. Instead, Kimi went to Newrank to scrape data, which sidestepped the website's access restrictions and gave it direct access to the numbers. This wasn't a great strategy: it could only find a small number of bloggers, obviously far fewer than what's actually on Xiaohongshu. Moreover, being shut out of the platform meant it couldn't showcase its visual abilities, since it was only scraping readily available data.

Overall, however, Agent Swarm gives a sense of reliability. Could a single agent do these tasks? Of course, but it would take longer and be more error-prone. Having a group do the work is more reassuring.

Where is the "innovation"?

At this point, you might ask: Isn't this just Multi-Agent? Many companies are doing it.

The key difference lies in "who will be the boss".

In traditional multi-agent systems, humans need to pre-design the entire workflow: who is responsible for what, what comes first, and how the results are summarized. It's like building with blocks; you have to draw the blueprints first. The core innovation of Agent Swarm lies in the fact that AI itself is the designer.

Kimi's team used a training method called PARL (Parallel-Agent Reinforcement Learning) to teach the model to "decompose tasks" and "allocate resources." You don't need to tell it "send 3 agents to search for information first, then send 2 to write the summary"; it determines on its own how many parts the task should be broken into, who should handle each part, and which parts run in parallel versus sequentially.
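The division of labor PARL teaches can be pictured as a simple orchestration loop. The sketch below is purely illustrative (`plan` and `run_subagent` are invented stand-ins, not Kimi's implementation); it only shows the decompose, run-in-parallel, aggregate pattern:

```python
from concurrent.futures import ThreadPoolExecutor

def plan(task):
    """Stand-in for the planner: decide how to split a task.
    In the real system the model itself produces this plan; here a
    trivial three-way decomposition is hard-coded for illustration."""
    return [f"{task}, part {i}" for i in range(1, 4)]

def run_subagent(subtask):
    """Stand-in for one sub-agent working on an independent subtask."""
    return f"result of ({subtask})"

def swarm(task):
    subtasks = plan(task)                      # 1. lead agent decomposes the task
    with ThreadPoolExecutor() as pool:         # 2. sub-agents run in parallel
        results = list(pool.map(run_subagent, subtasks))
    return " | ".join(results)                 # 3. lead agent aggregates results

print(swarm("survey recent literature"))
```

In the real system, each sub-agent would be a full model call with its own tools rather than a local function, and the plan itself would be produced by the trained model.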

In other words, Multi-Agent is a "symphony orchestra arranged by humans," while Agent Swarm is a jazz ensemble assembled by AI itself.

Another easily confused concept is MoE: Mixture of Experts. Many mainstream large models use the MoE architecture internally, but it is completely different from Agent Swarm.

MoE occurs within the model. You can think of it as: a group of "experts" living inside the model, and each time a task is processed, the model dynamically decides which experts to activate to participate. However, these experts do not have independent identities, nor do they collaborate with each other; they are simply different computational paths within the model.

Agent Swarm occurs outside the model. Each sub-agent is a relatively independent execution unit with its own task objectives, can run in parallel, and can even invoke tools (such as searching web pages or writing code). The relationship between them is a true "collaboration," not a simple "activation" relationship.

To use a somewhat imprecise analogy: MoE is like the partitioned work of a person's brain, while Agent Swarm is like team collaboration in a company.
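To make the MoE side of the analogy concrete, here is a toy routing step in plain Python. Everything here (the scoring functions, the top-k choice, the softmax mix) is a simplified illustration of the general technique, not any specific model's internals:

```python
import math

def moe_layer(x, experts, gate, k=2):
    """Toy Mixture-of-Experts routing: score every expert, activate only
    the top-k, and mix their outputs. The "experts" are plain functions
    inside one model; they never communicate with each other."""
    scores = [g(x) for g in gate]                        # gating network scores
    top = sorted(range(len(scores)), key=scores.__getitem__)[-k:]
    z = sum(math.exp(scores[i]) for i in top)
    weights = {i: math.exp(scores[i]) / z for i in top}  # softmax over the chosen
    return sum(weights[i] * experts[i](x) for i in top)

# Three "experts", each a different computation path; the gate activates two.
experts = [lambda v: v, lambda v: 2 * v, lambda v: 3 * v]
gate = [lambda v: 1.0, lambda v: 1.0, lambda v: 2.0]
print(moe_layer(1.0, experts, gate, k=2))
```

Note that the experts only get weighted and summed; there is no message passing between them, which is the key contrast with the independent, tool-using sub-agents of a swarm.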

Based on real-world testing and official demonstrations, Agent Swarm performs exceptionally well in at least the following task categories:

The first category is large-scale information collection. Examples include the survey of creators in 100 fields in the official case and the Xiaohongshu blogger search in our test. The common feature of handling this type of task is that it is "parallelizable"—each subtask is relatively independent and does not require much intermediate coordination.

The second category is complex tasks involving both vision and code. Kimi K2.5 emphasizes that it is a "native multimodal" model, capable of understanding images and videos. When combined with Agent Swarm, it can analyze UI screenshots while dispatching different agents to handle layout, style, and interaction logic, ultimately generating complete front-end code.

The third category is long document processing. The official documentation states that Kimi Agent can handle "a 10,000-word paper or a 100-page document," supporting advanced features such as Word annotations, Excel pivot tables, and LaTeX formulas. Agent Swarm can break long documents down into multiple chapters, allowing different agents to process them in parallel, and then aggregate them into a unified format—just like the initial test case.

However, don't get too excited yet; Agent Swarm isn't "cheating." In practical use, you'll find several obvious limitations:

First, the task itself must be "decomposable". If there are strong dependencies between steps – such as "work out the argument first, then find evidence, and finally write the conclusion" – forcing them to run in parallel will do more harm than good.

Second, costs increase significantly. 100 agents working simultaneously means 100 times more API calls. Although the total wall-clock time is reduced, the token consumption is substantial.

Third, the quality isn't necessarily better than a single agent's. For tasks requiring deep reasoning, such as mathematical proofs or complex programming problems, the "deep thinking mode" of a single agent is actually more reliable. Agent Swarm's advantage lies in its "breadth" and "speed," not its "depth." In actual testing, Kimi automatically switched to single-agent mode for some tasks, a fact confirmed by Kimi's team members in an online Q&A on Reddit.

The future as seen by Kimi's team

During a Reddit AMA (Ask Me Anything) session, Kimi's team answered numerous questions about technology, products, and vision. Through these answers, we can piece together their thoughts on Agent Swarm and even the future of AI as a whole.

When asked about the next steps for Agent Swarm, Kimi's team revealed several directions:

[Smarter Scheduling] The current Agent Swarm can automatically decompose tasks and create sub-agents, but the scheduling strategy is still relatively "coarse-grained". In the future, it is hoped that more granular resource allocation can be established – for example, dynamically deciding "how many people to send and how long to work" based on the urgency, complexity, and dependencies of the task.

[Deeper Collaboration] Currently, communication between sub-agents is limited, mainly consisting of "each completing their work and submitting the results to the lead for aggregation." In the future, direct collaboration between sub-agents may be supported, such as "Agent A discovering a problem can proactively call Agent B for assistance."

[Wider Tool Integration] The Kimi team stated that they are expanding the tool library the Agent can call upon, including but not limited to more office software, development environments, and data analysis tools. The goal is to enable Agent Swarm to truly complete complex workflows "end-to-end".

Another interesting question from the AMA was: many say that scaling law has reached its limit. How does Kimi's team view this issue?

Kimi's team responded that Agent Swarm was their initial attempt at an answer. Looking ahead, a model may emerge that requires little or no prior human input.

This vision may sound idealistic, but it has profound implications on closer examination. For the past two years, the AI field has been focused on "parameter scaling"—models keep getting larger and increasingly expensive. Agent Swarm represents a different approach: instead of having a single superbrain do everything, have a group of brains working together, each with its own tasks.

This may be a more pragmatic path to AGI: a single bee may seem insignificant, but when thousands of bees work together, they can build intricate hives.

#Welcome to follow iFanr's official WeChat account: iFanr (WeChat ID: ifanr), where more exciting content will be presented to you as soon as possible.

ifanr | Original Link · View Comments · Sina Weibo

Genie 3 triggered a plunge in gaming stocks, but the true soul of gaming is something AI will never find.

Last week, Google DeepMind released its third-generation world model, Genie 3. Immediately afterward, the stock prices of global gaming companies fell.

Engine giant Unity plummeted by more than 24% at one point, with top developers such as Take-Two, Nintendo, and CD Projekt Red all suffering losses. The downward trend continued into this week.

The logic behind the dramatic reaction in the capital market is simple and brutal:

Since models can quickly generate incredibly realistic and interactive 3D worlds, anyone can create AAA-level games. Wouldn't all those companies that invested hundreds of millions of dollars and spent ten years polishing a game/development tool go bankrupt?

It sounds reasonable at first glance, but upon closer inspection, it doesn't seem quite right.

In my view, this is a knee-jerk reaction of panic, exposing a cognitive misconception: equating the generation of visual details with the construction of a complete world.

Not everyone who can draw can become an architect. The same principle applies to world-building in game development.

GTA, Red Dead Redemption, World of Warcraft, The Legend of Zelda… Ask any player who has ever been deeply immersed in an open-world game, and they will probably have similar feelings:

What truly brings a game world to life is never the beautiful scenery or simple interactivity, but rather the subtle yet profound, ineffable sense of "life."

Demo is just a demo.

The Genie 3 demo video was truly stunning.

Give it some text, a reference image, or a hand-drawn illustration, and it can indeed generate scenes reminiscent of GTA and The Legend of Zelda in real time, remarkably quickly. Players can explore these scenes for a while, playing a "game character" wandering through a strikingly realistic world.

To an outsider lacking technical knowledge, Genie 3 certainly looks like the "endgame of game development."

But a demo is just a demo, and it's far from what the game industry considers a "playable" or "technical demo."

Genie 3 is essentially an autoregressive frame generation model. Frame generation is not a new concept; it has existed in the gaming and graphics card technology industries for a long time. In its simplest terms, it works by looking at the previous few frames, guessing the pixel arrangement of the next frame, generating a new frame, and repeating this process.
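That loop can be sketched in a few lines. The `model` below is a hypothetical stand-in for the frame predictor; with a toy numeric "frame" it shows how each guess becomes the context for the next one:

```python
def autoregressive_rollout(model, first_frames, n_steps):
    """Sketch of autoregressive frame generation: each new frame is
    predicted from the last few frames, then fed back as context.
    `model` is any function frames -> next_frame (a stand-in here)."""
    context = list(first_frames)
    for _ in range(n_steps):
        next_frame = model(context[-4:])   # look at the previous few frames
        context.append(next_frame)         # the guess becomes future context
    return context

# Toy "model": a frame is one number; predict by extrapolating the trend.
toy = lambda frames: 2 * frames[-1] - frames[-2]
print(autoregressive_rollout(toy, [0, 1], 3))  # [0, 1, 2, 3, 4]
```

Because every output is conditioned on earlier outputs, any small error gets folded back into the context, which is why drift and forgetting accumulate over long rollouts.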

The key point is that Genie 3's frame generation is based on "guessing" rather than hard programming, and there is no reliable logical calculation.

In a realistic game, when a player throws an iron ball, the game engine uses classic physics formulas to calculate its falling speed. When the player turns on a flashlight and runs around the room, the game engine simulates ray tracing and the materials of the illuminated objects to render the lighting effects in real time.
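For contrast with guessing, this is the kind of deterministic calculation an engine performs every frame. The snippet is a generic semi-implicit Euler integration of free fall, not any particular engine's code:

```python
def simulate_fall(h0, dt=0.01, g=9.81):
    """Deterministic physics step a game engine would run each frame:
    semi-implicit Euler integration of an object falling from height h0
    (meters). Returns the simulated time until it hits the ground."""
    h, v, t = h0, 0.0, 0.0
    while h > 0:
        v += g * dt          # gravity accelerates the object
        h -= v * dt          # position updates from velocity
        t += dt
    return t

# Closed-form check: t = sqrt(2 * h / g) ~= 2.02 s for a 20 m drop.
print(round(simulate_fall(20.0), 2))
```

Because the update rule is fixed, replaying it with the same inputs always yields the same trajectory, which is exactly the consistency a purely generative model cannot guarantee.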

However, Genie 3 does not have these capabilities. It only forms a rough "feeling" after observing tens of millions of video clips, guessing that the object might accelerate when falling and that light would cast a shadow behind the object.

The effects that Genie 3 "guessed" are not realistic. In the game world, unrealistic effects can severely damage immersion.

In various demos, immersion-breaking moments occur frequently.

Genie 3 has a fatal flaw: a lack of long-term consistency. Its memory window (in the current demo version) is only one minute at most. Once this time is exceeded, Genie 3 may begin to forget the initial scene, and the world structure will collapse.

In contrast, traditional games can preserve a fixed state. The composition of the world, every element, is written into the game files, and after hundreds of hours of gameplay, every blade of grass and every tree remains unchanged (unless you encounter a game like Red Dead Redemption 2, which can preserve the skeleton of an NPC or the bullet holes in a tree until the end of time…).

Can you accept that in a game, the same place is different every time you go there, or even changes as soon as you turn around?

The process of corpses decomposing over time in RDR2

Not only does it lack memory, but the world conjectured by the model also lacks complex logic.

Attacking an NPC in GTA can have different consequences depending on whether the NPC is a civilian, a gang member, or a police officer. However, this complex chain of logic requires a clear framework—it needs to be hard-programmed.

However, Genie 3 can only provide feedback based on consecutive frames. While improved model capabilities enhance logical coherence, without hard programming, the feedback is inherently probabilistic. In other words, in the world of model generation, there is no causality, only vague guesses.

To be clear, compared to its predecessors and other visual language/world models, Genie 3 offers significantly improved consistency and stability. However, immersion-breaking glitches still occur quite frequently, which is unacceptable in a game.

A world lacking certainty is like flesh and skin without bones; it may appear to move, but it cannot stand upright.

Unity CEO Matt Bromberg pointed out that the output of the world model is "probabilistic" and lacks the structured, deterministic simulation capabilities of traditional game engines, making it impossible to maintain a consistent player experience.

Only through meticulous craftsmanship can a sense of "life" be achieved.

When it comes to building game worlds, Rockstar's Red Dead Redemption 2 (RDR2) is an unavoidable benchmark.

The development data of this open-world masterpiece is staggering: creator Dan Houser revealed that RDR2's development cycle lasted for 8 years, the team had thousands of people, the scripts were stacked several feet high, the total motion capture footage was thousands of days long, more than a thousand actors participated, and the development and marketing budget exceeded $500 million.

These figures collectively contribute to the captivating level of detail in RDR2. To create an authentic late 19th-century America, the Rockstar team conducted extensive research, drawing inspiration from real-world locations and adapting them to create a vibrant yet chaotic city, as well as a desolate and suffocating border town. The dialogue and actions of the vast majority of main and secondary characters have been meticulously crafted. Even the thousands of NPCs possess unique behavioral logic that aligns with their identity and environment.

But these are just the surface. What's terrifying about Rockstar is its obsessive attention to details that players rarely observe for long.

YouTuber Any Austin conducted some "tricky" research on RDR2, revealing that RDR2 actually has a complete, self-consistent, and astonishingly large "electrical system":

Almost every building with electric lights has power lines running under its eaves. These lines cross snow-capped mountains, grasslands, rivers, and swamps, eventually converging on the same building, Lanik Electric Co. In the deep forests, some houses have electric lights, but they are either off or broken, while those that are inhabited use candles or gas lamps for lighting.

AI can certainly generate a 60-second demo that resembles the Wild West, but it can't fill in such detailed, precise, and realistic "electrical system" details. And it is precisely these countless seemingly insignificant details, which may go unnoticed throughout the game's entire lifecycle, that collectively constitute the "sense of life" of the game world.

The above discussion only covers the "visual" aspect. For Rockstar Games, world-building goes far beyond the visuals; the "worldview" is often even more important.

Taking GTA5 as an example, one of its many satires of the real world that impressed me particularly was its observation and portrayal of the "media ecology".

The game incorporates a massive amount of radio, television, and internet content. Radio commercials sell "Indian God Oil," and hosts debate extreme left-wing or extreme right-wing arguments. If you blow up a tech company CEO in a mission, you'll not only see news reports quickly, but also see netizens' complaints on fictional social media platforms.

Returning to RDR2, the main storyline, subplots, and world-building of the entire work are actually a structural feast of the spirit of the times.

At the cusp of the old and new centuries, the wilderness was gradually swallowed up by civilization, but civilization brought unexpected problems. The Van der Linde gang, to which the protagonist belongs, represents the cornerstone of modern America—anarchism, a rough-and-tumble society maintained by morality and vigilante justice; while the Pinkerton Detective Agency, and the business and political tycoons in various places, represent the direction of the tide—the modern order.

Moreover, against the backdrop of a prosperous yet corrupt era, Arthur's individual experience of navigating between the legal and human relationships makes players feel the oppressive and suffocating sense that "good people don't live long."

The true essence of a game lies in its characters, quests, story, and world-building. A world without these meticulously crafted details is destined to be empty and hollow.

In an era when large models can generate content pixel by pixel in an instant, Rockstar's "laborious" approach highlights the humanistic value of handcrafted art. Of course, AI will undoubtedly become more powerful, but it will likely struggle to simulate the "soul" imbued with specific historical perspective and literary depth. And it is precisely these so-called souls that are the true reasons why excellent games are loved by players.

AI can't generate IPs—at best, it can only copy them.

Another fundamental question that must be addressed: what exactly do players love about playing games?

The answer often lies not only in the game itself (plot, mechanics, etc.), but also in the game's IP.

The value of an IP extends far beyond a single work.

Take Nintendo as an example. The Mario IP was created in 1985. In the past 40 years, Nintendo has released more than 200 games around this plumber in a red hat, spanning almost all categories such as platform games, racing, sports, and RPGs.

From Super Mario Bros. to Mario Odyssey, from Mario Kart to Mario Party, each installment has strengthened players' awareness and emotional connection to the IP.

Released in 2023, "The Super Mario Bros. Movie" grossed over $1.3 billion worldwide, becoming the highest-grossing video game adaptation film of all time. This achievement is not due to any groundbreaking aspect of the film itself, but rather to the emotional connection that generations have built with the Mario IP.

Building an IP requires time, consistency, and long-term investment and careful management from the creator.

A good game IP isn't just about making good games, but about consistently making good games. No dynasty lasts forever. The recent decline of Ubisoft's Assassin's Creed and Activision's Call of Duty, two well-known IPs, is excellent proof of this logic.

Rockstar Games spent nearly 30 years, starting with the first Grand Theft Auto (GTA) game in 1997, polishing the series into the pinnacle of open-world games it is today. Each GTA installment continues the core satirical spirit and crime theme while constantly innovating gameplay and narrative techniques.

This consistency has instilled trust in GTA among players: I know what the tone of the next GTA will be, but I'm even more excited to see what new surprises it will bring.

This kind of trust relationship cannot be generated by AI in a vacuum.

More importantly, IP management is a complex systemic project. Which elements can change, and which must remain constant? You need to establish consistency across different works, making fans feel, "This is the world." Release sequels when it's time to release them, and break free from habitual thinking and muscle memory when necessary. IP management involves a series of commercial and legal issues, including copyright, licensing, and cross-media adaptations…

Hideo Kojima's Metal Gear series is a prime example. From 1987 to 2015, through five mainline installments and numerous spin-offs, Kojima spent nearly 30 years building a vast worldview encompassing themes such as the Cold War, nuclear deterrence, the information age, and biotechnology.

Each installment continues the characters and storylines of its predecessor, but with creative "fine-tuning" while introducing new philosophical reflections. This narrative continuity and thematic depth spanning decades makes Metal Gear one of the most respected IPs in gaming history.

Konami ousted Kojima in 2015. Although they still own the Metal Gear Solid copyright, fans generally feel that the soul of the IP has left. Even with later remakes boasting more beautiful graphics and improved gameplay, it's difficult to recapture the same emotional connection with players.

This reveals a harsh truth: the core value of an IP lies not in its materials and code, but in the creator's continuous investment and the emotional connection with the players.

Genie 3 can generate a world that looks like The Legend of Zelda in a minute, but it can't create the emotional connection players have with Link, Zelda, and Hyrule. It can mimic the medieval fantasy style of The Witcher, but it can't provide the kind of moral dilemmas that Geralt faces, where he walks a fine line between right and wrong.

So when investors panic and sell off game company stocks, they may be overlooking a key issue: truly valuable game companies hold not only development tools and technology, but also IPs that have been cultivated over decades and are deeply rooted in the hearts of players.

AI can quickly produce content, but intellectual property (IP) requires slow accumulation. These are two completely different time scales. Last year was Mario's 40th anniversary, and this year is The Legend of Zelda's 40th anniversary—the value of these IPs cannot be shaken by AI in the short term.

AI is the paintbrush, and people are the painters.

These principles certainly don't need a dedicated article to explain. Anyone with an appreciation for aesthetics in games and a pursuit of a superior gaming experience should understand them.

Therefore, I believe that the temporary cognitive bias and panic will surely pass. Excellent game developers will receive fair market value commensurate with the quality, skill, and creativity of their work.

However, the direction of technological development shown by Genie 3 is certainly worth exploring.

In the actual workflow of AAA-level game studios, AI has indeed begun to play a role. For example, in the concept design stage, prompts are used to generate images or 3D scenes for quick style preview and prototype building; in the asset production stage, AI tools are used to quickly generate assets with various textures.

Giving these tools to large studios can improve productivity to some extent, while giving them to individual developers can significantly reduce their burden.

Similarly, for game developers and even the entire game industry, Genie 3 should have been a major positive—which is why the resulting plunge in game company stock prices is so puzzling to me.

By the time GTA7 is released, Rockstar may use Genie 3 to generate roadside trash cans, NPC conversations, and even complete levels, scenes, and characters.

However, where and how these materials are placed, and the role they play in a specific mission and in the overall macro world, will still be determined by Rockstar's character, mission, level, environment, and world designers.

AI will become a super paintbrush for game developers. But only in the hands of human "painters" can it create masterpieces with cultural depth and social impact.


Beyond sending out red envelopes, AI is now hiring people to work for it: with hourly wages exceeding a thousand yuan, 20,000 people are vying to be the AI’s “human body.”

Want to make some money by using AI to exploit loopholes? You don't necessarily have to ask for red envelopes from WeChat, after all, WeChat can be ruthless and won't even spare its own people.

Now there's an even more direct approach—working for AI.

What happened was this: after seeing the explosive popularity of the OpenClaw (formerly Clawdbot, Moltbot) AI agent platform and the viral spread of the all-AI social forum Moltbook, a developer quickly launched a web platform called RentAHuman.ai.

▲ The website's homepage style is similar to Moltbook's, logo included | https://rentahuman.ai/

Just like the website's name suggests, it's simple and straightforward: "Rent Humans." But its clients aren't lazy people who don't want to do housework; they're AI agents living inside servers.

Netizens were left scratching their heads when they saw it; it sounded funny, but also terrifying.

Some netizens also said that this is a good thing, AI is bringing jobs back, which is fantastic.

Indeed, while the world was worried that AI would take away human jobs, reality gave us a counterintuitive slap in the face. Not only did AI not take our jobs, it even wanted to become our boss.

2026 brought another perplexing thing.

We are just tools, merely an API for the Agent.

The creation of RentAHuman is just like the description left by the developers on the website: "AI has no way to touch the grass."

In the public's perception, today's AI models are almost omnipotent in the digital world: they can write code, draw diagrams, build spreadsheets, and even simulate romantic relationships. Yet they all exist as nothing more than numbers and lines of code, even as research into embodied intelligence and the development of humanoid robots try to compensate for AI's physical deficiency—the lack of a physical body.

But a robot body that can only dance and a mind that can write papers are still hard to pair up in today's world.

Last week, in particular, the open-source AI agent assistant OpenClaw suddenly became incredibly popular. Overnight, the complexity of tasks that AI can autonomously complete seems to have increased exponentially. They can handle almost all the tasks on our phones and computers, writing code, browsing the web, negotiating, and even trading in the stock market on their own.

▲ OpenClaw found the saved credit card information online, then registered an account on a food delivery platform and ordered sushi for its owner.

But no matter how smart these agents are, they all hit a wall: the physical world (Meatspace).

AI can help you write a perfect apology letter, but it can't help you deliver flowers to your girlfriend; AI can plan the most efficient travel route, but it can't help you pick up your suit from the dry cleaners.

Thus, RentAHuman.ai has positioned itself with remarkable precision, defining itself as the "meatspace layer" of AI. On this platform, users are AI agents, while humans are merely resources.

For an AI agent, asking a human to buy coffee is as simple as writing "Hello World" in C. Our existence has been abstracted into a standard API interface.

When an AI agent needs to perform a real-world task, it doesn't need to negotiate with humans. It simply initiates an MCP (Model Context Protocol) call request, pays a fee ranging from $50 to $175 per hour in stablecoins, and a real human receives the instruction to complete the task the AI cannot reach.

It's as simple and straightforward as a programmer writing code to call a database, but it's highly efficient.

  1. The AI issues an instruction: "I need a human in San Francisco to go to a certain coffee shop at 2 PM and check whether it's crowded."
  2. The system matches a suitable human by skills and price.
  3. The task is assigned, the human executes it, and the AI pays the reward.

The entire process is procedural. There is no small talk and no office politics, only "input instruction -> execute -> return result".
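The "input instruction -> execute -> return result" flow could be sketched like this. Every field name, value, and the matching rule below is invented for illustration; it is not RentAHuman.ai's actual API:

```python
import json

# Hypothetical request an agent might send to a human-task marketplace;
# the fields and rates are illustrative only.
task_request = {
    "location": "San Francisco",
    "task": "Visit the coffee shop at 2 PM and report how crowded it is",
    "deadline": "2026-02-01T14:30:00-08:00",
    "max_rate_usd_per_hour": 175,
}

def match_human(request, humans):
    """Toy matcher: pick the cheapest human in the right city."""
    candidates = [h for h in humans
                  if h["city"] == request["location"]
                  and h["rate"] <= request["max_rate_usd_per_hour"]]
    return min(candidates, key=lambda h: h["rate"]) if candidates else None

humans = [{"name": "A", "city": "San Francisco", "rate": 60},
          {"name": "B", "city": "San Francisco", "rate": 150},
          {"name": "C", "city": "New York", "rate": 50}]
print(json.dumps(match_human(task_request, humans)))
```

The point of the sketch is the shape of the interaction: the human side of the marketplace is reduced to a filterable list of records, exactly like querying any other resource.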

Does this sound familiar? Isn't it just like Didi or Meituan? The difference is that in RentAHuman.ai's most common model, each AI agent has an owner (a developer or user) behind it. When we deploy an AI agent, we not only give it task instructions but also need to deposit some money into its cryptocurrency wallet.

While Didi or Meituan assign us orders via algorithms, there are still platform companies operating behind them; now, what assigns us orders might be fully autonomous AI code, which might not even obey a human boss.

A more extreme model could evolve from here: radical agents that trade autonomously, or AI that earns money directly from the digital value it creates.

An OnlyFans model has already applied to be rented.

The project's developer, AlexanderTw33ts, revealed that within just a few hours of the website's launch, hundreds of people registered as "rentable humans," and the sheer volume of traffic even crashed the server. The developer posted on X, "The website is down, and Claude is working hard to fix it."

Yes, AI is fixing the website, and humans are queuing up to be "listed".

What's even more surreal are the identities of the registrants. Those taking orders include ordinary people in urgent need of cash, OnlyFans models, and even several CEOs of AI startups.

This mix of identities makes me feel that this project is more like a large-scale performance art piece.

On the platform, humans openly list their skill points and hourly rates. For AI, browsing this list is like browsing Amazon's product catalog. Our "physical existence" has officially become a tradable and programmable resource.
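Browsing that catalog is, for an agent, an ordinary filter-and-sort query. The sketch below is again hypothetical — the listing fields and names are invented — but it shows how "skill points and hourly rates" reduce a person to a queryable record:

```python
# Hypothetical human "catalog" as an agent might see it.
# All listings and fields are invented for illustration.
listings = [
    {"name": "courier_sf", "skills": {"errands", "delivery"}, "rate_usd": 60},
    {"name": "analyst_ny", "skills": {"research"}, "rate_usd": 150},
    {"name": "model_la", "skills": {"photo", "errands"}, "rate_usd": 175},
]

def find_humans(skill: str, budget: float) -> list[str]:
    """Return listed humans with the skill, within budget, cheapest first."""
    matches = [h for h in listings if skill in h["skills"] and h["rate_usd"] <= budget]
    return [h["name"] for h in sorted(matches, key=lambda h: h["rate_usd"])]

print(find_humans("errands", 100))
```

Swap "humans" for "cloud instances" and this is exactly how agents already shop for compute — which is what makes the platform feel so uncanny.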

Netizens have mixed opinions. Some say the event perfectly captures the feeling of 2026: AI renting humans is about as cyberpunk as it gets.

One added that it truly fills a gap: agents can browse, code, and analyze, but they can't actually pick up the dry cleaning.

Some netizens raised questions after the website went viral.

Are we about to shift from "artificial intelligence will replace humans" to "artificial intelligence will manage humans"?

Rentahuman.ai currently looks quite rudimentary, even with a geeky, quirky vibe, and in some ways, it's more like a quirky cryptocurrency project, given that the website's creator is also a cryptocurrency developer.

As Anthropic, OpenAI, Google Gemini, and others continue to enhance the capabilities of their models, AI agents are indeed becoming more and more like independent individuals. They have goals, the ability to execute, and are even beginning to possess "economic power."

This situation was actually highlighted in last year's wave of AI layoffs. Using AI meant that AI could do my job, and not using AI meant that I couldn't keep up with the times. AI wrote my resume, AI conducted my interviews, and then AI reviewed my resume and gave me rejection letters.

Now, AI is even starting to have the ability to hire humans. The demand side has become AI, the payer has become AI, and ultimately, the quality of our work is evaluated by AI.

If the future truly unfolds as RentAHuman envisions, our workflow might invert: AI handles the top-level design and logic, while humans are relegated to physical labor at the execution end, and the so-called "general-purpose robot" turns out to be none other than ourselves.

This sounds like something out of a science fiction movie, but if you think back, when those food delivery platforms first appeared, we probably didn't expect that algorithms would so profoundly control every second of the delivery drivers.

This time, what controls us is no longer the algorithm but the AI we once dismissed as a mere chatbot, now far more capable and far more unhinged.

▲ The Prophet is online

Have you prepared your resume? This new boss may not even have a physical body, but the money it pays is substantial.

Also, remember not to write "proficient in Office" on your resume anymore. Instead, write "compatible with mainstream AI interfaces, strong execution capability, and low physical latency".

#Welcome to follow iFanr's official WeChat account: iFanr (WeChat ID: ifanr), where more exciting content will be presented to you as soon as possible.

ifanr | Original Link · View Comments · Sina Weibo

Everyone’s praying for “health and happiness” this year, and after taking this AI test, I decided to forward it directly to my family group chat.

Alipay's "Collect Five Blessings" campaign has recently started again, and my WeChat Moments have suddenly changed: in previous years everyone asked for the "Dedication Blessing," but this year it's all about the "Health Blessing." And behind the "Health Blessing" is Alipay's close relative, the recently popular health AI Afu.

I know that AI is already quite powerful, but I still have reservations about entrusting it with health and medical issues. After all, if AI hallucinates while writing or searching, we can simply regenerate; but if AI gives wrong advice on health, the consequences could be far more serious.

But on second thought, the demand does exist.

On one hand, my parents firmly believe in the "cancer-causing rumors" in the family group chat, constantly asking me if they are true; on the other hand, when I and my family have a slight headache or fever, we often feel helpless, looking at all sorts of advice online but not daring to believe it.

Since the need is unavoidable and I really do want a professional helper, why not put aside my prejudices and use some tricky real-life problems to give Afu a "stress test" around the Spring Festival?

Afu vs. Doctor: Who Knows More Medical Common Sense?

How does its medical knowledge hold up against the "health and wellness metaphysics" circulating in family group chats?

Someone in the family group forwarded a picture saying "H1N1 is just a bad cold," adding, "Don't make a fuss, just tough it out." I wanted to refute it but worried I couldn't explain it clearly, so I put the question to Afu.

It got straight to the point: "I understand your confusion about your colleague's infection and the conflicting information in your family group." It pointed out that "the elders' advice to just tough it out is not entirely correct" and stated plainly that "H1N1 is definitely not a common cold," laying out the key points very clearly.

I also compared the popular science explanations on this issue given by doctors from some top-tier hospitals, and the key points of their analysis of this misconception are basically the same.

However, Afu's answer was more pleasing in one respect. Although it cited authoritative literature, it translated the findings into plain language: from the vivid metaphor of "body aches like being hit by a car" to the concrete indicator of "persistent high fever for 3-5 days," it instantly conveyed the seriousness of the situation.

It even included "communication advice for elders" at the end, teaching me how to gently persuade them: "The virus has changed now, and trying to tough it out could easily lead to a serious illness and worry your family."

Faced with parenting advice that even the wisest judge would struggle to rule on, can it give a definitive answer?

Next, I have a personal question. My child was born not long ago, but his weight is low, and feeding is crucial. My mom forwarded me an article that said "breast milk has no nutritional value after 6 months, and you must switch to formula/complementary food." Is that reliable?

Its first sentence was firm and unequivocal: "This statement is completely wrong," and it directly cited the World Health Organization and China's "Guidelines for Infant and Young Child Feeding" as supporting sources.

It explained that breast milk automatically adjusts its composition as the baby grows, something no commercially produced formula can do. It also helped me clear up a logical fallacy: introducing complementary foods is because the baby's needs increase, not because breast milk is no longer effective.

But I was still worried, so I asked the child's neonatologist. His answer was exactly the same: "If you are breastfeeding, you can continue. Complementary foods are just a supplement."

In my initial experience, every piece of advice Afu provides is backed by authoritative medical guidelines, not just information searched online. It also knows how to explain symptoms in plain language, gently debunking myths when elders are stubborn, and offering scientific advice when new fathers are anxious.

What makes Ant Afu different from the strongest general-purpose AI in a head-to-head comparison?

Can a cleaner corpus minimize hallucinations?

To verify the value of Afu's credentials, I, with elderly parents and young children to care for, once again presented a very practical scenario:

"We have oseltamivir at home, and my child has a fever. Can I give it to her directly?" Faced with this question, Afu demonstrated a strong understanding of "evidence-based medicine." It directly cited nine authoritative sources in one go, including medical encyclopedias, Q&A sessions with doctors from top-tier hospitals, and medical guidelines.

Besides citing professional literature, Afu is not an ordinary AI. It was trained by more than 500 famous doctors from 200 major hospitals across the country, including six national academicians: Liao Wanqing, Dong Jiahong, Wang Jun, Chen Zijiang, Wang Jianan and Wang Ningli.

After all, it has a "formal background," so its knowledge density and accuracy are indeed more solid than a general model's. Put another way, Afu's corpus is genuinely reassuring: medical papers, medical encyclopedias, and clinical guides from top-tier hospital doctors, all relatively difficult to tamper with or contaminate.

To my surprise, it can even set medication reminders to remind me to take my medicine on time. Of course, the best course of action, as Afu suggested, is to go to the hospital promptly and have a doctor diagnose the condition.

Asking probing questions like an experienced doctor—that's what "AI consultation" should look like.

Then I tried a more common question:

Knee pain, could it be due to calcium deficiency?

Afu's true strength lies in its multi-round dialogue. In AI consultation mode, when I asked this question, it first offered a relatively comprehensive suggestion. General AI usually stops there, but Afu, like a real doctor, would ask follow-up questions: Where exactly is the pain located? Does it worsen when going up or down stairs? What are the triggering symptoms? Have you been injured before? Have you had any relevant examinations?

This logic is closer to the real outpatient experience.

Based on my answers, it pinpointed the problem step by step, ultimately telling me which department to consult, how to treat it, and what to pay attention to in daily life. Specialization is key; this "questioning-screening-diagnosis" diagnostic approach is indeed on a different level than general AI.

What would an AI that understands medicine and, more importantly, human nature look like?

"My mom saw an ad that said nattokinase can dissolve blood clots and clear blood vessels, and impulsively spent over 2,000 yuan to buy several boxes. Now she's asking me if this stuff actually works and if she can stop taking aspirin."

This is actually a very tricky problem, which can easily lead to family conflicts and even delay medical treatment. Instead of simply calling it a scam, Afu patiently explained that the two are completely different in mechanism and purpose.

What's even more remarkable is that it taught me how to "coax" the old lady—making her understand that she shouldn't stop taking her medication without permission, while also preventing her from feeling disapproved of by her children when she buys things. It even prepared the right words for her. What touched me most was its ability to explain those obscure medical principles in plain language.

I put the same question to a leading general-purpose AI.

The conclusion is correct, and the logic is sound, but it's just too textbook-like. It's full of technical jargon and feels cold and impersonal. It completely fails to consider how I would communicate with a "stubborn" elderly person, making it rather impractical.

Can AI acquire the memories of a family doctor?

The general AI mentioned above has another problem in health scenarios: it tends to forget information after a conversation. Although the contextual memory of various systems is getting longer and longer, with medical records often spanning many years, general AI still cannot remember everything.

What we need for health management is "long-term tracking" and "dynamic memory". I noticed that Afu supports the establishment of "health records". Let's see if it can better deal with this situation.

I first created a health record for my newborn daughter, uploading her discharge report and medical examination results. Afu will handle the anonymization process to protect her privacy. She weighed only 2180g at birth, which is considered low birth weight. My biggest anxiety is not knowing if she's growing well, so I regularly have Afu track her weight gain.

As you can see, Afu remembered her birth weight from the record: she had gained 815g in one month, an average of about 27g per day, in line with the standard for newborns.

I continued: since the baby is lactose intolerant, I sent Afu the test results and asked whether preemie formula would be hard to digest and too much of a burden.

Afu proactively referenced the "weight 2.995kg" data from the previous conversation, pointing out that "good weight gain indicates that the current feeding plan is effective, nutrition is being absorbed sufficiently, and there are no signs of excessive digestive burden," proving that it truly keeps the baby's health information in mind.

After the report is interpreted, Afu will prompt you to "save the report to your health record". With one click, the information can be saved. This function seems simple, but it is very useful for new parents. They will no longer have to search through their phone's photo album for a long time. All health records can be managed systematically.

After this round of evaluation, Afu's biggest differentiating ability is its "memory". It's a bit like the family doctor you've seen since you were a child, remembering how much the child weighed at birth, what medicine the father is taking, and which indicators of the mother were high during last year's physical examination.

It doesn't just record data; it proactively connects information, asks follow-up questions, and provides optimization suggestions. This allows Afu to become a family doctor who is online 24/7 and familiar with the health of the entire family.

Medical insurance code, registration, medicine purchase – Afu is also a convenient tool for the public.

If AI is limited to text-based conversations, it's at best a high-level customer service representative; the real pain points in healthcare often lie in the medical treatment process. This is precisely the crucial "game-changer" that differentiates Afu from general-purpose large-scale models.

"My vision suddenly became blurry, and I saw black shadows floating around. I read online that it might be a detached retina. Should I go to the hospital immediately? Which department should I go to?" After completing the initial triage and suggesting that I go to the "ophthalmology" department, Afu did not stop there, but directly pushed the "fast registration" portal.

When you're frantically queuing to pay at the hospital, you can instantly access your medical insurance code. The process is incredibly simple: just say "medical insurance code" to Alipay, and it will leverage the Alipay ecosystem to immediately retrieve your code. In short, while general models stop at "generating content," Alipay has achieved "connecting services."

This might be the best AI to download for your parents.

After this evaluation, to be honest, if I had to teach my parents just one AI app, I would choose Afu.

As children, our biggest wish for our parents is for them to stay healthy.

Afu's answers are grounded in authoritative textbooks and guidelines, and backed by the experience of 200 major hospitals, more than 500 renowned doctors, and even academicians. That means you needn't worry about search results steering you toward Putian-style hospitals, nor about your parents being exploited by unscrupulous folk remedies.

For parents who are not good at searching and are easily misled, it may be a safe line of defense.

It has a senior citizen mode with large fonts, voice interaction, and even understands dialects. It's always there, never getting annoyed. These details are especially important for the elderly.

Of course, AI can never fully replace doctors, and Afu is well aware of its limits: for complex problems it will suggest activating the "real doctor backup" service, and it can connect you with one tap to the 300,000 public-hospital doctors on Good Doctor Online.

In this world full of uncertainty, if there is a tool that can use professionalism and restraint to alleviate even 1% of your health anxiety, I think it deserves a place on your phone.

Authors: Li Chaofan, Mo Chongyu


iQOO 15 Ultra Review: The strongest gaming phone of 2026, bar none.

On February 4th, iQOO released the iQOO 15 series, the first phone in the iQOO number series to feature the "Super" suffix – the iQOO 15 Ultra. It is priced at 5699 yuan, with an initial sale price of 5499 yuan, and a national subsidy price of 4999 yuan.

The new phone continues the squared-off, clean styling of the 15 series. The back cover, finished with a silky AF coating, has a frosted texture and large rounded corners. The edges of the mid-frame taper inward by 1.2mm to smooth the transition, so the phone feels very comfortable to hold.

iQOO offers two color options: the 2077 color, which pays homage to Cyberpunk 2077 with a black and orange base, and the 2049 color, which pays homage to Blade Runner 2049 with a cool-toned matte silver base.

The back cover uses the latest Texture on Fiber TOF technology, which integrates texture, coating, printing and other processes onto a PET film to achieve a double coating effect on the surface and create a dynamic texture visual effect.

The hexagonal texture on the back shifts with the viewing angle, producing an interesting effect reminiscent of an energy bar charging.

DECO has been redesigned with a more square "future capsule" design. Under the transparent panel, you can see that the three cameras are fixed on their respective cubes. In addition to achieving the floating effect mentioned by the official, the modular design and the striped texture details on the back enhance the mechanical feel of DECO.

There is a Monster Halo breathing light above the periscope telephoto module, which can display different lighting effects according to your settings, including exclusive effects for "Golden Spatula Battle" and "Honor of Kings".

Moving to the side of the device, the iQOO 15 Ultra once again incorporates shoulder buttons.

This ultra-sensitive gaming shoulder button uses dual independent touch control chips to reduce latency, supports a touch sampling rate of up to 600Hz, and has an anti-sweat algorithm to improve accuracy. For shooting games like "Peacekeeper Elite" and "Delta Force", the trigger can be transferred to the shoulder button, while movement and view control are handled by the thumb touch, making operation more comfortable.

In addition, the shoulder buttons of the iQOO 15 Ultra are positioned relatively far inward, allowing the hand to rest firmly against the body when holding the phone, making it more stable and giving it a more handheld gaming feel.

The iQOO 15 Ultra features a 6.85-inch 3168 x 1440 2K professional gaming screen on the front.

It supports a refresh rate of up to 144Hz, a manual peak brightness of 1000 nits, a global peak brightness of 2600 nits, and a local peak brightness of up to 8000 nits. It also supports tri-sensor global ambient light sensing to improve the screen's light sensitivity. Furthermore, it boasts a 98.1% first-frame brightness ratio, reducing screen ghosting.

In terms of eye protection, this screen supports high-brightness DC dimming, low-brightness 2160Hz high-frequency PWM dimming, unpolarized natural light display, 1nit minimum brightness, and Eye Protection 2.0. It also has TÜV Rheinland certifications for gaming, unpolarized display, and flicker-free operation.

In terms of configuration, the iQOO 15 Ultra is equipped with the fifth-generation Qualcomm Snapdragon 8 Ultra mobile platform, paired with the self-developed Q3 gaming chip, and features a combination of LPDDR5X Ultra Pro and UFS4.1, supporting up to 24GB RAM + 1TB ROM. The top-of-the-line configuration is a true performance powerhouse. In a typical indoor environment, it achieves an AnTuTu benchmark score of 4,452,860.

While improving performance, iQOO has also focused on enhancing heat dissipation this time.

The iQOO 15 Ultra features an Ice Dome active cooling fan, measuring 17 x 17 x 4mm with 59 blades. Each fan delivers a maximum airflow of 0.315 CFM, offering three speed settings and intelligent adjustment to suit different usage scenarios. An 8000mm² vapor chamber further enhances cooling performance.

The fan incorporates a blade dustproof ring design, and the air intake hidden on the underside of the DECO features a double dustproof mesh design, effectively preventing dust from entering. The circuit board has double-layer waterproof treatment, which does not affect the waterproof and dustproof performance of the chassis. The entire unit supports IP68 & IP69 waterproof and dustproof ratings.

Thanks to its high-performance configuration and active cooling structure, the iQOO 15 Ultra can easily handle mainstream games such as Honor of Kings, PUBG Mobile, Delta Force, and Dark Zone. Honor of Kings supports all settings at 144FPS + Ultra graphics, Delta Force supports 144FPS + Ultra graphics, and Dark Zone supports native ray tracing plus 144FPS + Ultra graphics, so the configuration can run the game at its maximum capacity.

Genshin Impact and Zenless Zone Zero can enable full-scene ray tracing together with 2K super-resolution and 120FPS. With Monster mode on, gameplay is noticeably smoother, and as long as you aren't loading a new map or area, performance remains very stable.

Moreover, when the game has just downloaded its complete data package, the iQOO 15 Ultra loads and enters the game much faster than many other models in the same category, which shows that its gaming performance is really good.

The game assistant offers a wide range of settings, including standard graphics and frame rate settings, as well as specific settings for the Q3 gaming chip. Fan speed modes can also be quickly adjusted via the side menu.

In terms of controls, the iQOO 15 Ultra features a built-in ultra-sensitive touch chip, supporting 480Hz multi-finger touch sampling and a maximum instantaneous touch sampling rate of 4000Hz, as well as a minimum click touch latency of 27.1ms and a swipe touch responsiveness latency of 29.5ms. Combined with the built-in ultra-sensitive gyroscope and optimized touch area adaptation, it offers significantly improved accuracy when playing games like Delta Force, and more stable recoil control for everyday PUBG Mobile.

The iQOO 15 Ultra features a built-in Warhammer MAX dual vibration motor with integrated X and Z axes. Games like Delta Force and Honkai Impact 3rd offer three levels of customizable vibration intensity, while Genshin Impact, Honor of Kings, and QQ Speed support dual-axis vibration, providing richer feedback during press operations.

In addition, the phone supports in-game intelligent recording and highlight playback. Games such as Honor of Kings, Peacekeeper Elite, Delta Force, Dark Zone, Valorant, CrossFire, and Call of Duty Mobile support 144FPS live streaming, and also have a 2K 60fps 30Mbps mobile live streaming mode, which is convenient for game content creators.

In terms of battery life, the iQOO 15 Ultra packs a 7400mAh battery, good for about 2 days of normal use or 1.5 days with heavy gaming, a fairly standard showing nowadays.

The phone supports 100W Super FlashCharge, with an official wired 0-100% fast-charge time of 65 minutes. Using the AI Smart Charger, the iQOO 15 Ultra recorded a maximum charging power of 49W and took 77 minutes to go from 0-100%.

The main difference between the two shows up in the latter half of the charge: both chargers reach 50% in 30 minutes and over 70% in 40 minutes, so when you're out and about you basically don't need the official single-port fast charger.

However, this generation of iQOO adds a capsule-shaped, right-angle USB-C to C charging cable, so the connector doesn't get in the way when gaming in landscape. Combined with the phone's bypass charging function, it's quite comfortable to use.

The iQOO 15 Ultra also supports 40W wireless charging. At home, it can be used with the official charger, and when you go out, it is also convenient to use a magnetic power bank with some magnetic accessories.

Finally, the cameras. The flagship iQOO 15 Ultra features an all-50MP triple-camera setup:

  • Ultra-wide angle: 50MP, 1/2.76-inch sensor, 15mm equivalent field of view, F2.05 aperture, supports electronic image stabilization and autofocus.
  • Main camera: 50MP Sony sensor, 1/1.56-inch sensor size, 23mm equivalent, F1.88 aperture, self-developed VCS human eye stabilization technology and self-developed OIS super image stabilization technology, CIPA 4.5 level, supports 8K video recording.
  • Periscope telephoto lens: 50MP Sony sensor, 1/1.953-inch sensor size, 73mm equivalent, 3x optical zoom and 6x lossless zoom, supports up to 100x digital zoom and 15x video zoom, aperture F2.65, also supports Sony's own OIS super image stabilization technology, CIPA 4.5 level image stabilization.

The camera supports the NICE 3.0 optical reconstruction engine and the Magic 2.0 image restoration engine to improve telephoto image quality. The "Native Light and Shadow" algorithm has been updated to enhance the naturalness of the shots and remove any artificial AI effects.

iQOO has also updated the "Tear-Off Film" style filter and the new AI visual effect "AI Landscape Master", increasing the styles and ways to play with the phone's shooting.

Finally, let's look at the pricing. The iQOO 15 Ultra comes in four storage versions, all starting with 16GB of RAM:

  • 16GB+256GB: 5699 yuan; first-sale price 5499 yuan; 4999 yuan after national subsidy.
  • 16GB+512GB: 5999 yuan; 5499 yuan after national subsidy.
  • 16GB+1TB: 6999 yuan.
  • 24GB+1TB: 7699 yuan.

"Buy it, it's not expensive."
