
AI Talking to Itself and Mega-Bets on OpenAI: What These Two Milestones Mean
In the last 24 hours, two key things have happened in the AI world: researchers demonstrated that models learn better when they “talk to themselves,” and SoftBank is negotiating to invest up to an additional $30 billion in OpenAI.
Both pieces of news point in the same direction: we are entering a phase where the internal architecture of models and the scale of capital become the true battleground.
1. AI that “talks to itself”: what it is and why it matters
1.1. The experiment: inner speech + working memory
A team from the Okinawa Institute of Science and Technology (OIST) published a study showing that an AI learns faster and generalizes better when it combines internal self-dialogue (“self-talk” or “mumbling”) with a brain-inspired working memory architecture.
The core idea is simple but powerful:
- The model has multiple “working memory slots”, temporary containers where it stores pieces of information while solving a task.
- Before producing the final answer, the AI generates several steps of inner speech, as if explaining to itself what it is going to do, reusing that working memory.
- This combination improves its ability to adapt to new tasks, handle several tasks at once, and solve multi-step problems, such as reversing sequences or regenerating complex patterns.
In tests, models with multiple memory slots generalized better than those with simpler memory, especially on tasks where they had to remember order or regenerate patterns. When they were also forced to “talk to themselves” a number of times before responding, performance rose even further, especially on long, multi-task jobs.
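To make the two ingredients concrete, here is a minimal toy sketch of (1) a small set of working-memory slots and (2) a fixed number of inner-speech steps before the final answer. The class names, the FIFO eviction policy, and the toy reversal task are illustrative assumptions; this is not the OIST architecture itself, which is a trained neural network.

```python
# Toy sketch: working-memory slots + inner-speech steps before answering.
# All names and the FIFO policy are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class WorkingMemory:
    n_slots: int = 4
    slots: list = field(default_factory=list)

    def write(self, item):
        """Store intermediate information, evicting the oldest slot
        once capacity is reached (a simple FIFO policy)."""
        if len(self.slots) >= self.n_slots:
            self.slots.pop(0)
        self.slots.append(item)

    def read(self):
        return list(self.slots)

def solve_with_self_talk(task_tokens, n_inner_steps=3):
    """Reverse a sequence, 'mumbling' intermediate plans into working
    memory before committing to a final answer (a benchmark-style task
    like the ones mentioned above)."""
    memory = WorkingMemory()
    for step in range(n_inner_steps):
        # Each inner-speech step re-reads memory and restates the plan.
        plan = f"step {step}: reverse {task_tokens}, so far {memory.read()}"
        memory.write(plan)
    # Only after the inner loop does the model produce its output.
    return task_tokens[::-1]

print(solve_with_self_talk(["A", "B", "C"]))  # -> ['C', 'B', 'A']
```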
1.2. What is changing in model design
This work fits into a broader trend: moving from models that only predict the next word to systems that reason in multiple steps with structured memory.
Some key patterns being reinforced:
- More “cognitive” long-term and working memory: recent reviews propose architectures inspired by human memory, such as SALM (Self-Adaptive Long-term Memory), so that models can accumulate new knowledge without overwriting what they learned before.
- Recursive summarization and context management: techniques that repeatedly condense earlier dialogue allow a model to maintain long conversations by building increasingly compressed memory layers (see the sketch after this list).
- Neuro-inspired hybrids: networks that mimic cortico-hippocampal circuits are being explored to mitigate “catastrophic forgetting” in continual learning.
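As a concrete illustration of the recursive-summarization pattern, here is a minimal sketch. The `summarize` function is a stand-in for an LLM call, and the chunk size and layering policy are assumptions for the example, not a specific paper's method.

```python
# Minimal sketch of recursive summarization for long conversations.
# `summarize` stands in for an LLM call; chunking is an assumption.
def summarize(text: str, max_chars: int = 200) -> str:
    """Placeholder for an LLM summarization call."""
    return text[:max_chars]  # a real system would call a model here

def condense_history(turns: list[str], chunk_size: int = 4) -> str:
    """Fold the conversation into increasingly compressed layers until
    it fits in a single summary string."""
    layer = turns
    while len(layer) > 1:
        # Summarize each chunk of the current layer, producing a
        # shorter, more condensed layer above it.
        layer = [
            summarize(" ".join(layer[i:i + chunk_size]))
            for i in range(0, len(layer), chunk_size)
        ]
    return layer[0]

history = [f"turn {i}: ..." for i in range(20)]
context = condense_history(history)  # compact memory for the next prompt
```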
OIST's contribution adds another piece: the value of explicit self-dialogue as a mechanism for internal reasoning and planning, not just as an external prompting trick. In product terms, this translates into models that plan internally before acting, which aligns closely with the rise of agents that use tools and execute complex workflows.
1.3. Practical implications for SaaS products and automation
If you build products on top of LLMs, these types of advances have several concrete consequences:
- More reliable agents with less data: the combination of structured memory + self-talk improves the ability to generalize with scarce data, which reduces dependence on giant datasets for each vertical.
- Better performance in multi-step tasks: in complex flows (e.g., an agent that qualifies leads, queries APIs, schedules meetings, and updates the CRM), a model with good “inner speech” can plan and self-correct better.
- Personalization of “reflective” agents: this opens the door to configuring how many steps of internal reasoning an agent performs according to the risk of the action (e.g., more steps before approving a large discount or a sensitive financial action); a minimal sketch follows this list.
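Here is a minimal sketch of that last idea, mapping a risk level to reasoning depth. The risk tiers, the depth values, and the `run_reasoning_step` hook are invented for illustration:

```python
# Illustrative sketch: action risk decides how many internal reasoning
# steps run before acting. Thresholds and hooks are assumptions.
RISK_TO_DEPTH = {
    "low": 1,      # e.g., drafting a follow-up email
    "medium": 3,   # e.g., applying a standard discount
    "high": 6,     # e.g., approving a large discount or a payment
}

def reflective_action(action: str, risk: str, run_reasoning_step) -> str:
    thoughts = []
    for _ in range(RISK_TO_DEPTH[risk]):
        # Each step can critique or refine the previous thoughts.
        thoughts.append(run_reasoning_step(action, thoughts))
    return thoughts[-1]  # the final, most-refined plan

plan = reflective_action(
    "approve 30% discount",
    risk="high",
    run_reasoning_step=lambda a, t: f"check margin impact of '{a}' ({len(t) + 1})",
)
```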
For a CRM/automation product, this points to a roadmap (sketched in code after the list below) where agents:
- Decompose sales/support tasks into internal sub-steps before touching production data.
- Maintain structured memory of conversations and previous decisions to be more consistent over time.
- Are trained or fine-tuned for different “depths of reflection” depending on the client's context.
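A hedged sketch of what that roadmap could look like: an agent that decomposes a task into sub-steps and consults a structured memory of past decisions for consistency. The step breakdown and memory schema are invented for the example; a real agent would delegate both to a model.

```python
# Sketch of the roadmap above: decompose a task into sub-steps and keep
# a structured log of decisions. Schema and steps are assumptions.
from datetime import date

decision_memory: list[dict] = []  # structured log of past decisions

def decompose(task: str) -> list[str]:
    """Toy planner; a real agent would ask an LLM for this breakdown."""
    return [f"validate inputs for {task}",
            f"dry-run {task} against staging data",
            f"execute {task} in production"]

def run_task(task: str):
    for sub_step in decompose(task):
        # Re-read earlier decisions so behavior stays consistent over time.
        precedents = [d for d in decision_memory if d["task"] == task]
        result = f"done: {sub_step} (precedents: {len(precedents)})"
        decision_memory.append(
            {"task": task, "step": sub_step, "result": result,
             "when": date.today().isoformat()}
        )

run_task("update CRM lead scores")
```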
2. SoftBank, OpenAI, and the new scale of capital in AI
2.1. The operation: up to an additional $30 billion
SoftBank is in talks to invest up to an additional $30 billion in OpenAI, as part of a round that could reach $100 billion and value the company at around $830 billion.
Key points of the operation:
- SoftBank already owns around 11% of OpenAI, after an earlier injection of about $41 billion at the end of 2025.
- The new funding is aimed both at software expansion and at colossal infrastructure projects, such as Stargate, a data-center plan of up to $500 billion for large-scale training and inference.
- SoftBank's declared strategic goal is to go “all-in” on AI to compete with other giants and capture a significant portion of the value in the software layer.
In parallel, it has been reported that SoftBank has been divesting from Nvidia to free up capital and redirect it toward OpenAI, reinforcing its bet on “software + its own infrastructure” rather than depending on third parties.
2.2. OpenAI as infrastructure, not just an app
This possible round places OpenAI in a different category: from model/app provider to critical infrastructure layer, almost at the level of an “operating system of the AI economy.”
Some signals pointing to that reading:
- The scale of investment (tens or hundreds of billions) is typical of utilities or massive infrastructure deployments (energy, telecommunications), not traditional software.
- Projects like Stargate are described as key for the US to maintain an advantage against China in frontier model training capacity.
- OpenAI is simultaneously negotiating with other giants like Amazon and Nvidia to participate in the round, consolidating an ecosystem of alliances and cross-dependence around its stack.
Strategic reading: the true moat shifts to the combination of capital + chips + energy + model ownership. Companies that do not control at least part of that chain will likely have to compete via specialization, proprietary data, or vertical integration in specific niches.
2.3. The context: Amazon layoffs and “AI-first” reorganization
While mega-rounds are being announced, another piece on the board is moving: Amazon announced it is cutting 16,000 corporate positions globally, its second major wave of layoffs in three months.
Key data from Amazon's move:
- The cut is part of a process to undo the pandemic-era staff expansion and redirect resources toward AI and data center infrastructure.
- The company has indicated that the reorganization seeks to “eliminate layers,” “reduce bureaucracy,” and increase investment in AI, especially in AWS and strategic products.
- In 2025 it had already executed another round of 14,000 layoffs, and its capex guidance projects up to $125 billion in 2026, heavily weighted toward AI.
The emerging narrative is clear: big tech is trading people for compute, slimming down human structures to free up resources for models, chips, and data centers.
2.4. What all this means for the rest of the ecosystem
For startups, SaaS, and builders, the SoftBank–OpenAI–Amazon combo has several implications:
- The “commodity” layer becomes more expensive but standardized: training your own frontier models will be unfeasible for most players; the rational move is to build on top of existing APIs and models, capturing value in UX, niche focus, workflow, and proprietary data.
- Pressure for real differentiation increases: if everyone can use GPT/Gemini/Qwen, the differentiator will not be in “having AI,” but in how you orchestrate it within critical business processes.
- More opportunities in “secondary” infra: agent observability tools, prompt security, compliance, AI decision auditing, inference cost optimization, etc., will be increasingly necessary.
For markets like Latin America, where many SMEs have not yet made the jump, this opens a window:
- You can package these frontier AI capabilities into accessible solutions (CRM, marketing automation, sales agents) without having to assume the brutal CAPEX of training models.
- The business story becomes very concrete: “at Conecto we use these models and deliver them to you as a ready-made workflow for sales, support, and operations.”
2.5. How to turn this story into powerful content
This second story has a perfect angle for combining strategy and storytelling: “AI is no longer a feature; it is infrastructure that requires country-scale investment.”
Focus ideas for an article or post:
- “SoftBank wants to put another $30 billion into OpenAI. That's not a round; it's a bet that AI will become the new digital power grid.”
- “Meanwhile, Amazon lays off 16,000 people to free up budget toward AI and infrastructure. The message is clear: companies that don't reorganize their structure around AI will fall behind.”
- “For an SME or a SaaS, the smart move is not to replicate this, but to ride on that infrastructure and specialize in concrete customer problems.”
3. Connection between both points: from “self-talk” to “all-in capital”
What makes it interesting to look at the OIST study and the OpenAI/SoftBank mega-round together is that they point to the two layers of the current AI wave:
- In the technical layer, we are moving toward models that reason internally, with memory and self-dialogue, getting closer to human cognitive functioning.
- In the economic layer, the sector is consolidating as extremely high CAPEX infrastructure, where only a few players will finance the training of frontier models.
For anyone building products on top of AI, the strategic conclusion is clear:
- You don't need to train the next frontier model, but you do need to understand how these models work internally (memory, self-talk, agents) to get the most out of them.
- You must design your business as “AI-native”, assuming that compute cost and access to models will increasingly look like paying for electricity or cloud, and that your competitive advantage will lie in workflow, data, and user experience.
This is the moment to adjust roadmaps: build agents that reason in multiple steps, rely on memory, and are deeply integrated into specific business processes, on top of an AI infrastructure that others are financing at historic scale.