
2026 Data & AI Predictions: What Trends Will Shape the Future?

We recently released our 2026 Confluent Predictions Report, outlining bold ideas and trends that are shaping the future of data, AI, and real-time systems. And stay tuned for an upcoming episode of the Life Is But a Stream web show that will air early in the new year. Join the conversation as host Joseph Morais sits down with Sanjeev Mohan, independent analyst at SanjMo, for an exciting roundtable discussion breaking down those predictions. Are they forecasts? Are they trends? And which ones will matter most as we move forward into 2026?

Move More Agentic Workloads to Production with AI Gateway 3.13

Kong AI Gateway 3.13 moves enterprises from AI experimentation to shipping production-grade agents by unlocking new capabilities focused on agentic security, developer productivity, and resilience, including MCP tool-level access control, expanded provider support, and smarter load balancing.

The Age of AI Connectivity

Kong was born to connect. The world is shifting from connecting cloud services with apps to connecting LLMs through agents. API calls and tokens now move in tandem, forming a new unit of intelligence. As AI traffic explodes in volume, performance matters more than ever, and the same principles of speed, security, and reliability behind Kong are essential in an agentic world. A new connectivity layer for AI is born.

Operationalizing Agentic AI with Hitachi iQ Studio and NVIDIA Nemotron 3

NVIDIA just announced Nemotron 3, a new family of open models, datasets, and libraries designed to support long-context reasoning and multi-step AI workflows. With the ability to work across enterprise ecosystems, this family of models empowers enterprises to build and deploy reliable multi-agent systems at scale, offering an important set of technologies at a pivotal moment in AI evolution.

7 RAG Evaluation Tools You Must Know

RAG evaluation measures how effectively a system retrieves relevant context and uses it to generate grounded answers. These evaluations detect hallucinations, measure retrieval precision, and reveal where pipelines degrade after model updates or knowledge-base changes. Engineers rely on these tools to maintain output quality, prevent regressions, validate prompt and architecture choices, and ensure that production answers stay aligned with trusted sources.
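To make these metrics concrete, here is a minimal sketch of two of them: retrieval precision@k and a naive token-overlap groundedness score. All function names and the example data are illustrative, not taken from any of the seven tools; production evaluators typically replace the overlap heuristic with LLM judges or NLI models.

```python
def precision_at_k(retrieved_ids, relevant_ids, k):
    """Fraction of the top-k retrieved chunks that are actually relevant."""
    top_k = retrieved_ids[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for doc_id in top_k if doc_id in relevant_ids)
    return hits / len(top_k)

def groundedness(answer, context):
    """Naive proxy for groundedness: share of answer tokens that also
    appear in the retrieved context. A low score suggests the answer
    may contain content not supported by the sources (hallucination)."""
    answer_tokens = set(answer.lower().split())
    if not answer_tokens:
        return 0.0
    context_tokens = set(context.lower().split())
    return len(answer_tokens & context_tokens) / len(answer_tokens)

# Score one hypothetical pipeline run.
retrieved = ["doc3", "doc7", "doc1", "doc9"]
relevant = {"doc1", "doc3"}
print(precision_at_k(retrieved, relevant, k=4))  # 0.5

answer = "the cache expires after 60 seconds"
context = "by default the cache expires after 60 seconds unless configured"
print(groundedness(answer, context))  # 1.0
```

Tracking scores like these across model or knowledge-base updates is what lets an evaluation suite flag regressions before they reach production.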

Accelerating the API SDLC with SmartBear MCP Server and Swagger MCP Tools

Note: The SmartBear MCP Server is under active development, and features may change. Check our GitHub repository for the latest updates and compatibility information. In this post, we show how the SmartBear MCP Server provides a secure, intelligent bridge between SmartBear platform data and AI-powered development workflows.

Digital Twins Gone Wild: My Unexpected AI Doppelgänger

I recently tried using AI to create a digital twin of myself. I uploaded a photo, expecting a futuristic, slightly improved version of me… and what did I get in return? A picture of Kim Jong Un. Clearly, AI has a sense of humor—or a very different definition of “twin.” Forget Arnold Schwarzenegger and Danny DeVito.

Inside ClearML's AMD Instinct GPU Partitioning Integration: Architecture, Orchestration, and Resource Management

GPU underutilization costs enterprises millions annually, with expensive accelerators frequently running single workloads at a fraction of their capacity. According to ClearML’s 2025-2026 State of AI Infrastructure at Scale report, almost half (49.2%) of IT leaders at F1000 companies identified maximizing GPU efficiency across existing hardware, including shared compute and fractional GPUs, as their top priority for expanding AI infrastructure over the next 12-18 months.