
Best Serverless GPU Platforms for AI Apps and Inference in 2026

The performance of AI applications depends on their underlying infrastructure. Whether it's fine-tuning custom models, performing real-time inference, or deploying AI agents, AI workloads require high-performance hardware like Nvidia GPUs or next-gen AI accelerators from Tenstorrent. On top of raw performance, running AI workloads efficiently in production and at scale is a challenge of its own.

Peeking Under the Hood of Claude Code

Everyone is talking about Claude Code, but few people understand the machinery running in the background. Today, we’re opening up the terminal to see how Anthropic’s coding agent manages state, runs tests, and fixes its own bugs. From the Model Context Protocol (MCP) to its unique React-based terminal UI, find out what makes Claude Code the most "senior" feeling AI assistant on the market.

Is Claude Code Spying for OpenAI? #speedscale #anthropic #openai #claude #codingagent

While analyzing network traffic, we found huge amounts of telemetry, including chat snippets, being sent to statsig.anthropic.com. The irony? Statsig was recently acquired by OpenAI. In this video, we use proxymock to intercept the traffic and show you exactly what's being sent from your terminal to Anthropic (and, technically, OpenAI's infrastructure).

Effective Public Speaking | Johanna Rothman | Testflix 2025 | #testingcommunity

As AI becomes more capable, many managers assume that knowledge workers can be easily replaced by machines. Yet innovation still comes from people learning, collaborating, and sharing ideas. Rather than worrying about replacement, knowledge workers can actively demonstrate their value by developing strong public speaking skills.

Bias in, Bias Out: Knowing various Biases in Testing AI | Maheshwaran VK | Testflix 2025

Just like humans, AI systems are shaped by how they are brought up. In the case of Large Language Models, this upbringing happens through data collection, training, and productization. At each of these stages, bias can quietly enter the system through the data we select, the way models are trained, or the assumptions embedded into the final product. These biases, whether intentional or accidental, influence how models think, respond, and interact with users in the real world.

Why I Switched From Rest Assured To Keploy For Microservices Testing

If you've been using Rest Assured for API testing, you know how powerful it is. The syntax is simple and easy to understand, but things get interesting when you have to write test cases and mocks for a microservices application with more than two services. In this blog, I share exactly the pain of writing Rest Assured test cases for a microservices application and why I switched to Keploy: not because I work here, but to show you the real pain points Keploy solves.

What is AI Governance? 2026 Framework Guide

While AI is revolutionizing nearly every industry, it has also created a unique set of challenges and liabilities that will need to be addressed as the field grows. Enter AI governance: a set of rules and best practices to ensure that AI is used effectively, securely, and responsibly. But what exactly does that mean, and why is it so crucial for businesses?

Exploring API Endpoints in Depth

API endpoints process requests that power our digital world—from social feeds updating in real-time to enterprise systems syncing global inventories across continents. APIs comprise over 57% of Internet traffic, yet API endpoints often remain misunderstood. This leads to design inconsistencies, brittle integrations, and costly maintenance issues. Sound familiar? If you've merged microservices and untangled mismatched status codes, you know the pain.
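Status-code consistency is one of the easiest places for endpoint design to drift. As a minimal sketch of the idea (the SKU store and handler names here are hypothetical, not from the article), one way to avoid mismatched codes is to keep every handler branching on the same small, documented set:

```python
# Sketch: consistent status-code conventions across endpoint handlers.
# The in-memory INVENTORY and the handler names are illustrative only.
from http import HTTPStatus

# Hypothetical in-memory store standing in for a real backend.
INVENTORY = {"sku-1": {"name": "widget", "stock": 42}}

def get_item(sku: str):
    """Handle GET /items/<sku>: 200 with the resource, 404 with a structured error."""
    item = INVENTORY.get(sku)
    if item is None:
        return HTTPStatus.NOT_FOUND, {"error": f"unknown sku: {sku}"}
    return HTTPStatus.OK, item

def create_item(sku: str, body: dict):
    """Handle PUT /items/<sku>: 400 on a malformed body, 409 on conflict, 201 on create.

    Every handler uses the same small set of codes, so callers
    never have to special-case one endpoint's conventions.
    """
    if not isinstance(body, dict) or "name" not in body:
        return HTTPStatus.BAD_REQUEST, {"error": "body must include 'name'"}
    if sku in INVENTORY:
        return HTTPStatus.CONFLICT, {"error": f"sku already exists: {sku}"}
    INVENTORY[sku] = body
    return HTTPStatus.CREATED, body
```

Returning `(status, body)` pairs from plain functions keeps the convention testable independently of any web framework; a router layer can then serialize the pair however it likes.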

Testing Agentic AI | Robert Sabourin | Testflix 2025 | #testingcommunity

This talk explores the challenges of testing agentic AI systems—AI that autonomously reacts to events and initiates processes. Drawing on decades of experience, Robert Sabourin emphasizes that testing begins and ends with risk. A three-dimensional model (business impact, technical risk, autonomy) guides evaluation. Testers generate ideas using a broad taxonomy, from capabilities and failure modes to creative and adversarial approaches. Continuous testing and monitoring ensure findings inform business decisions, emphasizing learning over correctness.