How to build a Copilot agent

A customer recently shared their debugging workflow with me. When an error shows up in Honeybadger, they import it to Linear, manually add context about where to look in the codebase, then assign GitHub Copilot to investigate. It works, but they asked a good question: could Copilot just access Honeybadger directly? The answer is yes—and it's easier than I expected.
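For a concrete flavor of what that direct access could look like, here is a minimal sketch: a small MCP server exposing one Honeybadger lookup tool that Copilot's agent mode could call. The endpoint path, auth scheme, and the get_fault tool name are my assumptions for illustration, not details from the article; check Honeybadger's Data API docs before relying on them.

```python
# Hypothetical sketch: an MCP server that lets an agent fetch a
# Honeybadger fault directly, instead of copying context by hand.
# Assumes the official `mcp` Python SDK; the Honeybadger endpoint
# path and basic-auth scheme below are assumptions, not verified.
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("honeybadger")

@mcp.tool()
def get_fault(project_id: int, fault_id: int) -> dict:
    """Fetch one error (fault) from Honeybadger so the agent can read
    its message, backtrace, and metadata while investigating."""
    resp = requests.get(
        f"https://app.honeybadger.io/v2/projects/{project_id}/faults/{fault_id}",
        # Assumed auth: personal API token as the basic-auth username.
        auth=(os.environ["HONEYBADGER_API_TOKEN"], ""),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; register it in your MCP client config
```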

Build Agentic Workflows: Expose API Orchestration as MCP Tools with Kong AI Gateway

Learn how to expose an API orchestration workflow as an MCP server using Kong AI Gateway, configure semantic guardrails, and build an agent with the Volcano SDK. We onboard GPT-4 behind /llm, orchestrate with DataKit, and debug MCP tools in Insomnia—end-to-end without adding server code.
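As a rough picture of the client side, the sketch below calls a gateway route like the article's /llm, assuming it has been configured (for example via Kong's ai-proxy plugin) to accept OpenAI-style chat-completion requests. The host, exact path, and API-key header are placeholders, not the article's actual configuration.

```python
# Hypothetical client call through a Kong AI Gateway route. Assumes the
# /llm route proxies OpenAI-style chat completions; the proxy address,
# path, and key-auth header below are placeholders.
import requests

GATEWAY = "http://localhost:8000"  # assumed Kong proxy address

resp = requests.post(
    f"{GATEWAY}/llm",  # exact path depends on how the route is configured
    headers={"apikey": "my-consumer-key"},  # assumed key-auth setup
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Summarize this API workflow."}],
    },
    timeout=30,
)
resp.raise_for_status()
# Assumes an OpenAI-compatible response shape is passed through.
print(resp.json()["choices"][0]["message"]["content"])
```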

Can We Still Trust the Code? #speedscale #qualityassurance #digitaltwin #trust #devops

The "Velocity Gap" is real. AI like Claude and GitHub Copilot are pumping out code faster than ever, but there’s a catch: Engineers don't trust it yet. We’re moving away from the old days of "clicking around" in a test environment, but how do we verify code at the speed of light? Ken breaks down why the future of QA isn't just "testing," it’s simulation. Video collab with @ScottMooreConsultingLLC Learn More: speedscale.com.

A Developer's Guide to MCP Servers: Bridging AI's Knowledge Gaps

Have you ever asked an AI assistant to generate code for a framework it doesn't quite understand? Maybe it produces something that looks right, but the syntax is slightly off, or it uses deprecated patterns. The AI is working hard, but it lacks the specific context it needs to truly help you. The Model Context Protocol (MCP) was designed to bridge this knowledge gap by giving AI assistants access to domain-specific knowledge and capabilities they don't have built in.
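To show the shape of that bridge, here is a minimal sketch using the official Python MCP SDK: a server that exposes a framework's current documentation as a resource, so an assistant can pull accurate, non-deprecated patterns on demand. The docs:// URI scheme and the local docs/ directory are illustrative inventions, not part of the protocol.

```python
# Minimal sketch of an MCP server that fills a knowledge gap by serving
# up-to-date framework docs. The docs:// URI scheme and the docs/
# directory layout are illustrative assumptions.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("framework-docs")

@mcp.resource("docs://{topic}")
def load_docs(topic: str) -> str:
    """Return the current documentation page for a topic, so the
    assistant generates code against today's API, not deprecated patterns."""
    path = Path("docs") / f"{topic}.md"
    if not path.exists():
        return f"No docs found for {topic!r}."
    return path.read_text()

if __name__ == "__main__":
    mcp.run()
```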

Multi-Node Training with ClearML

Orchestrating distributed AI workloads

Distributed (multi-node) training has become a requirement rather than an optimization for many modern AI workloads. As model sizes grow, datasets expand, and training timelines tighten, teams increasingly rely on multiple machines, often with multiple GPUs each, to complete training efficiently.
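For a rough sense of what this orchestration looks like in code, here is a minimal sketch built around ClearML's Task.launch_multi_node helper (available in recent SDK versions) fanning a PyTorch DDP job out across nodes. The queue name, node count, and model are placeholders, and device selection per local rank is elided.

```python
# Rough sketch of multi-node training orchestrated through ClearML.
# Assumes a recent clearml SDK with Task.launch_multi_node and a
# worker queue named "gpu-queue"; model and data are placeholders.
import torch
import torch.distributed as dist
from clearml import Task

task = Task.init(project_name="demo", task_name="multi-node-train")

# Clone this task onto the remaining nodes and set the usual rendezvous
# env vars (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE) on each copy.
task.launch_multi_node(total_num_nodes=2, queue="gpu-queue")

dist.init_process_group(backend="nccl")  # default env:// init reads those vars

model = torch.nn.Linear(128, 1).cuda()  # per-local-rank device setup elided
model = torch.nn.parallel.DistributedDataParallel(model)
# ... build a DistributedSampler-backed DataLoader and run the usual loop
print(f"rank {dist.get_rank()} of {dist.get_world_size()} initialized")
```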

5 Best Tools for Monitoring AI-Generated Code in Production Environments

AI-generated code is no longer experimental. It is actively running in production environments across SaaS platforms, fintech systems, marketplaces, internal tools, and customer-facing applications. From AI copilots assisting developers to autonomous agents opening pull requests, the volume of machine-generated code entering production has increased dramatically. This shift has created a new operational challenge: how do you reliably monitor AI-generated code once it is live?