The Great Corporate Pivot: Why ASUS Is Leaving Smartphones for Robotics, and the Privacy Fight Heats Up
Today’s headlines deliver a fascinating duality in the world of Artificial Intelligence. On one hand, we see massive, concrete corporate shifts proving that AI is no longer a peripheral venture but the core focus. On the other, we are reminded, often hilariously, that the underlying technology is far from mature. We are witnessing both breathtaking ambition and humbling failure, sometimes from the very same players.
The most jarring news of the day came from a hardware giant. ASUS chairman Jonney Shih announced that the company is effectively hitting the brakes on new smartphone models, declaring that ASUS is going “all in AI” [https://videocardz.com/newz/asus-goes-all-in-ai-and-stops-new-smartphones-chairman-jonney-shih-confirms]. This isn’t a minor reallocation of resources; it’s a profound strategic pivot, redirecting R&D toward commercial PCs, robotics, and smart glasses. The move is perhaps the clearest signal yet that legacy consumer electronics markets are being cannibalized by the AI revolution. Companies aren’t just adding AI features to old products; they are betting their futures on the idea that the next generation of computing interfaces will be fundamentally different, built around embedded intelligence and physical AI devices.
This rush to build the next generation of AI tools, however, is being met by a powerful counter-movement rooted in user privacy. Moxie Marlinspike, founder of the fiercely privacy-focused messaging app Signal, has unveiled his own alternative to ChatGPT, called Confer [https://techcrunch.com/2026/01/18/moxie-marlinspike-has-a-privacy-conscious-alternative-to-chatgpt/]. Confer aims to offer the powerful large language model experience users expect, with one critical difference: user conversations cannot be used for training or advertising. Its release is a necessary pushback against the data-hungry models dominating the market. As AI integrates deeper into daily life, from drafting emails to structuring schedules, a market is opening for products that treat personal data as a liability to be protected, not a resource to be mined. Privacy will inevitably become a key competitive differentiator against the Google and OpenAI behemoths.
Speaking of those giants, today offered a dose of reality regarding the reliability of deployed AI features. Despite the major strategic moves by the big players—moves Axios describes as signaling that the “AI race just entered a new phase” [https://www.axios.com/2026/01/17/chatgpt-ads-claude-gemini-ai-race]—the fundamental problem of hallucination persists. Google’s AI Overview feature stumbled on a simple temporal question, bizarrely insisting that next year is not 2027 [http://futurism.com/artificial-intelligence/google-ai-overview-year]. This kind of nonsensical error highlights the chasm between the aspirational marketing of these tools and their real-world stability. That the search engine responsible for organizing the world’s information can fail so badly on a basic fact is a crucial reminder that we are still dealing with predictive language models, not genuine intelligence.
In sum, the day’s developments show a field undergoing intense gravitational pull. Companies like ASUS are radically shifting resources toward a future dominated by AI hardware, while savvy developers like Marlinspike are creating necessary ethical alternatives to the current surveillance model. Yet, the viral failure of Google’s search-integrated AI keeps everyone grounded: the race may have entered a new phase of intense competition, but the underlying technical challenges are far from solved.
The coming year will not just be defined by who builds the most powerful model, but by who builds the most trustworthy one, both in terms of accuracy and privacy.