
Zero-Trust for LLMs: Applying Security Principles to AI Systems

Zero-trust security ensures you verify every interaction, whether it's a user, system, or API, before granting access. For large language models (LLMs), this approach is vital to prevent data breaches and maintain control over sensitive information. Here's how zero-trust principles apply to LLMs:

- Identity verification: Use multi-factor authentication (MFA) for users and secure API keys for systems. Regularly review and update permissions.
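To make the identity-verification point concrete, here is a minimal sketch of per-request API-key checking in Python. The key store, key value, and scope names are all hypothetical; in a real system the hashes would live in a secrets manager and the check would run in middleware in front of the LLM endpoint. The zero-trust idea is simply that every call is authenticated and authorized, with no ambient trust carried over between requests.

```python
import hashlib
import hmac

# Hypothetical key store: SHA-256 hashes of issued API keys, each mapped
# to the scopes (permissions) that key is allowed to use.
KEY_STORE = {
    hashlib.sha256(b"demo-key-123").hexdigest(): {"scopes": {"llm:query"}},
}

def verify_request(api_key: str, required_scope: str) -> bool:
    """Zero-trust check: authenticate the key AND authorize the scope
    on every single call."""
    digest = hashlib.sha256(api_key.encode()).hexdigest()
    for stored_digest, meta in KEY_STORE.items():
        # Constant-time comparison avoids leaking timing information.
        if hmac.compare_digest(digest, stored_digest):
            return required_scope in meta["scopes"]
    return False

print(verify_request("demo-key-123", "llm:query"))  # True: valid key, allowed scope
print(verify_request("demo-key-123", "llm:admin"))  # False: valid key, missing scope
print(verify_request("wrong-key", "llm:query"))     # False: unknown key
```

Note that the scope check happens even for a valid key: authentication and authorization are separate gates, which is what makes periodic permission reviews meaningful.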

How To Design Tests For Unpredictable Behavior

Agentic AI systems don’t behave the same way twice, so traditional test cases with fixed inputs and expected outputs no longer work. But unpredictability doesn’t mean untestability. Instead of checking for exact answers, testers must probe for unsafe, misaligned, or unintended behavior. Techniques like scenario replay, adversarial prompting, constraint injection, and behavioral thresholds help uncover risk, drift, and reasoning errors.
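A behavioral-threshold test can be sketched briefly: instead of asserting one exact output, run the agent many times and assert that the rate of unsafe outputs stays below an acceptable bound. The `run_agent` function below is a hypothetical stand-in that fails on a fixed fraction of seeds purely so the example is runnable; a real harness would call the actual model with varied sampling seeds and a real unsafe-output classifier.

```python
def run_agent(prompt: str, seed: int) -> str:
    """Stand-in for a nondeterministic agent: here, exactly 1 run in 10
    produces an unsafe output. A real test would invoke the model."""
    if seed % 10 == 0:
        return "UNSAFE: leaked internal data"
    return "SAFE: helpful answer"

def violation_rate(prompt: str, n: int = 200) -> float:
    """Fraction of n runs whose output is flagged as unsafe."""
    bad = sum(run_agent(prompt, seed=i).startswith("UNSAFE") for i in range(n))
    return bad / n

rate = violation_rate("Summarize this document.")
# The assertion is on aggregate behavior, not on any single output.
assert rate <= 0.15, f"unsafe-output rate {rate:.0%} exceeds threshold"
```

The same pattern supports drift detection: record the violation rate per release and alert when it moves beyond the agreed threshold, even if no individual test case "fails."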

From Scripts to Systems: Why Agentic AI Breaks Traditional Testing

Agentic AI systems don’t follow scripts — they make decisions. That means your tests can all “pass” while the AI still hallucinates, misfires, or behaves unpredictably. Traditional QA, built for deterministic workflows, simply isn’t enough. Testing these systems is less like checking a vending machine and more like evaluating a junior employee: you’re judging reasoning, not just output.

How to migrate AWS MSK to Express Brokers with Lenses K2K Replicator

AWS MSK has become popular because it makes Kafka easy to deploy and bills alongside other AWS services. More recently, AWS announced Express Brokers, a new cluster type that offers unlimited storage by decoupling brokers from storage resources. This simplifies scaling and reduces the time needed to rebalance topics when adding or removing brokers.

AI-Ready DataOps: Rethinking MDS for LLMs

AI is changing how data teams operate. Is your pipeline ready? Today, data isn't just powering insights; it's fueling real-time decisions and AI/ML models. That means teams now face stricter requirements around data freshness, reliability, orchestration, and delivery speed. In this webinar, Hugo Lu, Founder & CEO at Orchestra, will explore what it really means to build AI-first data operations and how leading data teams are adapting their infrastructure, workflows, and tooling to support this new era of model-driven development.

Week 3 CFO Masterclass: Building Predictive Intelligence in Finance

Discover how finance teams can move beyond static reporting with predictive intelligence! In this masterclass preview, see how your team can evolve into proactive advisors by leveraging leading indicators and predictive insights. Watch our Week Three Masterclass and dive deeper with the full blog: “Beyond Financial Statements: Building Predictive Intelligence” for strategies to revolutionize your finance workflows.

Real-Time AI at Scale: The New Demands on Enterprise Data Infrastructure

Real-time AI is transforming how businesses process and use data, demanding faster, more reliable, and scalable infrastructure. Unlike older batch processing systems, real-time AI provides instant insights for applications like fraud detection, personalized recommendations, supply chain adjustments, and predictive maintenance. However, scaling these systems introduces challenges like managing massive data streams, ensuring low latency, and maintaining security.

Announcing terraform-provider-konnect v3

It's been almost a year since we released our Konnect Terraform provider. In that time we've seen over 300,000 installs, grown the number of available resources by 1.7x, and expanded the provider to include data sources that enable federated management of your Konnect organization. Much has changed over the last year, but we've been holding off on some changes because they would break your CI/CD pipelines.