Jeetu Patel, President and Chief Product Officer at Cisco, spoke on the sidelines of the World Economic Forum 2026, indicating that the AI industry is maturing after a period of experimentation. With 2025 marking a transition from basic chatbots to more sophisticated agentic AI, 2026 is poised to put these systems into real-world use.
“2026 will mark the advent of agentic AI production,” Patel stated, mentioning that early iterations of physical AI and expansive world models will also begin to emerge, despite varying stages of development for different methodologies.
According to Patel, this transition is stimulating investment in areas where limitations are becoming increasingly apparent. He pointed out that infrastructure, trust, and data are now the most significant bottlenecks hampering AI growth. The demand for power, computing resources, and network capacity is surging, while security and safety are critical for enterprises to adopt these systems broadly. Data gaps, especially concerning machine-generated data, are also gaining attention. Patel emphasized that these challenges align closely with Cisco's offerings, reinforcing the company's recent momentum and positive outlook.
Varun Sivaram, Founder and CEO of Emerald AI, highlighted that power constraints are emerging as a significant risk to AI’s growth. He warned that 2026 could see energy shortages that may severely restrict AI initiatives in nations like the US and India. “They have chips; they require power in 2026,” Sivaram remarked, noting that even with plans for significant data center capacity, only a small fraction can currently be integrated with existing energy grids.
Emerald AI is working to tackle this problem by optimizing how data centers consume electricity. Sivaram mentioned that the Nvidia-backed firm moved from the lab to commercial deployment in just 18 months and has introduced what it calls the world's first power-flexible AI factory in the US. By enabling data centers to dynamically adjust their power usage, they can connect to the grid more quickly without raising electricity costs for neighboring communities, and can effectively tap existing spare capacity within energy systems.
For businesses facing an increasingly volatile global landscape, AI is being recognized as a vital decision-making resource. Chakri Gottemukkala, Co-Founder and CEO of o9 Solutions, stated that organizations are encountering unprecedented complexity amidst ongoing strain on supply chains and markets. While large language models have captured interest, he noted their influence on enterprise decision-making has been limited thus far. Gottemukkala believes the next phase lies in merging the accessibility of LLMs with structured enterprise knowledge through what he calls neuro-symbolic AI, allowing insights to move beyond specialists and reach frontline teams more effectively.
As AI systems become integral to business functions, concerns regarding trust and security are escalating. Jonathan Zanger, Chief Technology Officer at Check Point Software, explained that many AI solutions were developed without security considerations, creating vulnerabilities for attackers to exploit. He noted that organizations are responding to this issue, with boards and CEOs significantly scaling their budgets for AI security. “In 2026, we must double down on investments to securely adopt AI,” Zanger asserted.
Patel echoed this sentiment, emphasizing that although AI was initially utilized to bolster cyber defenses, companies now need to focus on securing AI itself. He argued that enterprise systems should be predictable and reliable, even as models become more complicated. Changes in architecture, partly driven by power constraints, are reshaping AI infrastructure, facilitating multiple data centers to operate as a unified virtual cluster.
The insights shared by industry leaders in Davos highlight a clear consensus: as AI transitions to large-scale implementation, success in 2026 will hinge less on experimentation and more on addressing practical challenges related to power, security, and enterprise integration.
Below is an excerpt of the discussion.
Q: Jeetu, can you start by discussing the mega trends likely to influence the AI landscape in 2026?
Patel: 2025 saw a shift to the second phase of AI, transitioning from chatbots to agentic AI. There has been considerable experimentation, as you noted. In 2026, we will see the production of agentic AI, along with initial manifestations of physical AI and extensive world models. I anticipate witnessing all three phases at differing levels of maturity.
Q: What implications does this have for investments? This trend has notably supported the US economy and global growth. What market opportunities do you see at present?
Patel: Investments will focus on areas facing constraints. We see limitations in infrastructure, trust, and data. The global supply of power, computing, and network capacity is insufficient to meet AI demand, which is driving significant investment. Trust in these systems is critical; without it, utilization declines. The data gap surrounding machine-generated inputs to AI is another vital area where investment is accelerating.
Q: How does 2026 look for Cisco specifically?
Patel: Interestingly, those three areas correspond directly with our product offerings.
Q: Is that merely coincidence?
Patel: We have experienced remarkable growth this past year, and we expect this positive trend to continue moving forward.
Q: Let’s address the power issue with Varun. What challenge is Emerald AI tackling today?
Sivaram: I believe 2026 will be the year when power constraints severely affect AI initiatives. In America, we're planning to build 50 gigawatts' worth of data centers within three years, but only about 25 gigawatts can be connected. The same situation exists in India. Comparatively, China will have 400 gigawatts of surplus capacity by 2030. For countries like the US, India, and the UK to remain competitive and advance AI innovation, they must resolve their power-related issues. They possess chips, but they will need power in 2026, which is what Emerald AI aims to address.
Q: How confident are you in solving this issue, and what specific measures are you implementing?
Sivaram: Emerald AI, as mentioned, is an Nvidia-backed startup. In the last 18 months, we transitioned from lab testing to commercial deployment with Nvidia. We've introduced the world's first power-flexible AI factory in Virginia. We develop software that makes AI data centers power flexible, meaning we can dynamically adjust power usage in AI factories or data centers. Today's power grid is largely underutilized, as it approaches peak load only infrequently. During such peak times, an Emerald AI-operated data center—like those powered by Nvidia or partners like Oracle—can reduce its power consumption, allowing it to connect to the grid faster and achieve integration in six months to a year rather than the typical decade. This enables the construction of significantly more data centers in the West or India. Moreover, this approach keeps costs stable for nearby communities. As President Donald Trump emphasized, data centers must not hike bills for everyone. With Emerald AI's technology, data centers can remain power flexible, keeping expenses low while maximizing the efficiency of the energy network and tapping 100 gigawatts of unused capacity.
Q: Chakri, let’s turn to you. We’re in a highly volatile and uncertain world, and supply chain resilience is under stress. What are your clients currently seeking?
Gottemukkala: A recent study claimed that 2025 was the most volatile year so far, and 2026 has begun in a similar vein. At o9, we assist companies in making decisions amidst this volatile, uncertain, complex, and ambiguous environment. Greater volatility and complexity make daily decision-making harder. As for AI utilization, despite attempts with LLMs, enterprise decision-making has seen limited success thus far. Future advancements depend on merging these easily accessible models with structured enterprise knowledge, leading to what we're calling neuro-symbolic AI. This approach combines the neural capabilities of LLMs with the symbolic representation of enterprise knowledge, extending decision-making access from executives to frontline teams.
Q: Jonathan, addressing Jeetu’s comments about trust, safety, and security, is this issue receiving sufficient attention?
Zanger: Absolutely, this matter requires significant attention. In recent years, considerable investment and innovation have been directed towards developing AI applications. However, many solutions have emerged without security considerations, and when evaluated from an attacker’s viewpoint, we identify vulnerabilities that threat actors could exploit. For 2026, securing AI adoption must be a priority through increased investment.
Q: Are CEOs currently allocating funds towards this direction?
Zanger: Indeed, we’ve observed a surge in AI security budgets, which is unprecedented. Two years ago, this trend wasn’t as pronounced, but it is now a primary concern for boards and CEOs.
Q: For CEOs, top priorities include trust, safety, and security. Jeetu, as Chief Product Officer, how are you integrating these elements?
Patel: Initially, AI was employed to enhance defenses against increasingly sophisticated attackers. Now, we must secure AI itself, because models can behave unpredictably while enterprise applications need consistency. Effective guardrails and validation are crucial. Our investments extend beyond AI for cyber defense to encompass the security of AI itself. Moreover, architectural changes driven by power constraints mean that instead of running a model on a single data center's GPUs, clusters of GPUs across data centers are increasingly interconnected to act as one virtual ultra-cluster. This transition requires a new type of silicon and chips, which we specialize in, and it is essential as models expand and the demand for scaling persists.
Q: How scalable is this technology, and how prevalent is its use?
Patel: Hyper-scalers are currently constructing clusters comprising hundreds of thousands of GPUs across various locations. Local power availability is often insufficient, so data centers are established where power exists. When power can be accessed from multiple locations, the sites are networked to function as a single virtual GPU cluster. Additionally, innovations on the power front, as Varun highlighted, play a crucial role. These combined factors prepare the infrastructure for AI.
Watch the accompanying video for the complete discussion.