
Checklist for Distributed Tracing in Complex Data Pipelines

Distributed tracing is a method for tracking requests across interconnected systems, providing visibility into how data flows through complex pipelines. It helps identify bottlenecks, troubleshoot errors, and improve system performance. Why it matters: traditional logging often misses the big picture in distributed systems, while tracing connects the dots, enabling root cause analysis, performance monitoring, and improved reliability.

Marco Palladino on the growth of APIs with AI | CUBE Conversation

Marco Palladino, co-founder and CTO of Kong, joins theCUBE Research’s John Furrier for an insightful conversation exploring the latest technological advancements at the company. As a recognized expert in the field, Palladino shares his perspectives on API innovations and the evolving impact of AI in a world increasingly interconnected by digital technologies.

How Delphix Virtualization Drives Cost Savings and Cloud-Readiness #shorts

Discover how Delphix’s data virtualization platform pays for itself by eliminating hardware costs, enabling on-demand environment creation, and preparing businesses for seamless cloud migration. With Delphix, teams can deliver higher-quality products faster, support multiple channels with ease, and strengthen collaboration across the organization.

How to Benchmark API Protocols for Microservices

API protocol benchmarking helps you measure and compare the performance of communication protocols like REST, GraphQL, and gRPC in microservices. It’s not just about speed - it’s about finding the protocol that works best for your system under realistic conditions. Benchmarking identifies bottlenecks, helps with scalability, and ensures your architecture performs well under load.
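A sketch of the measurement side, assuming nothing about your stack: time many calls against a stand-in workload (in a real benchmark, swap in a REST, GraphQL, or gRPC client call) and compare latency percentiles rather than averages, since tail latency is what users feel under load:

```python
import statistics
import time

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[idx]

def benchmark(call, iterations=1000):
    """Time `call` repeatedly and summarize latency in milliseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        call()  # replace with a real protocol client request
        samples.append((time.perf_counter() - start) * 1000)
    return {
        "p50_ms": percentile(samples, 50),
        "p95_ms": percentile(samples, 95),
        "mean_ms": statistics.mean(samples),
    }

# Stand-in workload; a real run would issue requests over the protocol under test.
result = benchmark(lambda: sum(range(1000)), iterations=200)
print(result)
```

Running the same harness against each protocol under identical, realistic payloads is what makes the comparison meaningful.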

Data Automation for Enterprise Innovation: 6 Challenges to Solve

Enterprises like yours manage terabytes to petabytes of data daily. Collecting and storing this massive volume of information is already complex. But the real challenge lies deeper: delivering data effectively, securely, and in ways that empower your teams to innovate. This blog takes a deep dive into the current state of enterprise data automation and examines the limitations of legacy solutions. Then, we explore how top-performing organizations approach automation in their industries.

Service Catalog: The End of API Sprawl

Introducing Kong's Service Catalog, a powerful feature within the Kong Konnect platform designed to improve the lives of API producers. In this demo, learn how you can get a complete, 360-degree overview of your entire service ecosystem, not just the services running behind a Kong gateway. This tool is essential for platform teams who need to enforce governance and for application teams who need clear visibility into service details and dependencies.

Debugger in Kong Konnect

Are you spending too much time trying to track down failing requests or figure out performance issues within your Kong API Gateway? In this quick demo, we show you how to use Konnect's Debugger to save hours of debugging time by rapidly finding the root cause of latency and other issues. You'll learn how Debugger allows you to set up deep tracing sessions for your Kong data planes, collecting OpenTelemetry-compatible traces across the entire request and response lifecycle. We will walk you through a real-world scenario where we diagnose a spike in latency for a specific service.
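To illustrate why per-phase tracing across the request and response lifecycle localizes a latency spike (a pure-Python sketch with hypothetical phase names, not the Konnect Debugger API): time each phase separately, and the slow phase identifies itself.

```python
import time

def traced_request(phases):
    """Run each lifecycle phase and record how long it took, in ms.

    `phases` maps a phase name (e.g. request parsing, plugin execution,
    upstream call, response send) to a callable doing that work.
    """
    timings = {}
    for name, work in phases.items():
        start = time.perf_counter()
        work()
        timings[name] = (time.perf_counter() - start) * 1000
    return timings

# Simulated lifecycle where the upstream call is the slow phase.
timings = traced_request({
    "parse_request": lambda: time.sleep(0.001),
    "run_plugins": lambda: time.sleep(0.002),
    "upstream_call": lambda: time.sleep(0.05),  # injected latency spike
    "send_response": lambda: time.sleep(0.001),
})
slowest = max(timings, key=timings.get)
print(slowest, round(timings[slowest], 1), "ms")
```

A whole-request latency number would only tell you something is slow; the per-phase breakdown tells you where, which is the point of deep tracing sessions.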

How to Build a Single LLM AI Agent with Kong AI Gateway and LangGraph

In the previous post, we discussed how to implement a basic AI Agent with Kong AI Gateway. In part two of this series, we review LangGraph fundamentals, rewrite the AI Agent, and explore how Kong AI Gateway can be used to protect an LLM infrastructure as well as external functions.
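The core LangGraph idea can be sketched in plain Python (a simplified stand-in, not the actual LangGraph API): nodes are functions that read and update a shared state, and edges decide which node runs next until the graph terminates.

```python
# Hypothetical node functions; in a real agent the lookup and answer
# steps would be tool and LLM calls routed through the AI gateway.
def plan(state):
    state["steps"] = ["lookup", "answer"]
    return state

def lookup(state):
    state["context"] = f"facts about {state['question']}"
    return state

def answer(state):
    state["answer"] = f"Based on {state['context']}, here is a reply."
    return state

NODES = {"plan": plan, "lookup": lookup, "answer": answer}
EDGES = {"plan": "lookup", "lookup": "answer", "answer": None}  # linear graph

def run_agent(question):
    state, node = {"question": question}, "plan"
    while node is not None:  # walk the graph until a terminal node
        state = NODES[node](state)
        node = EDGES[node]
    return state

result = run_agent("Kong AI Gateway")
print(result["answer"])
```

Real LangGraph adds typed state, conditional edges, and checkpointing on top of this loop; the sketch only shows the state-machine shape the series builds on.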