Systems | Development | Analytics | API | Testing

Engineering the Path to Autonomous Quality

AI has rewritten the rules of software development. Developers can now generate, fix, and ship code in seconds. This is transformative, but as SmartBear CEO Dan Faulkner puts it, “The tools that help us build software are advancing much faster than the tools that ensure we can trust it.” For software quality to keep pace with AI coding, today’s tools must evolve. SmartBear is answering this call.

StartupSpot: ThoughtSpot Agentic Analytics at Startup Prices

In the world of early-stage startups, the real enemies aren’t competitors. They’re time, focus, and credibility. You’re racing to ship fast, protect runway, and look enterprise-ready long before you actually are. And in 2025, every serious buyer expects your product to have native, AI-powered analytics built in. Not basic dashboards, but a polished, conversational, ask-anything data experience that delivers trusted answers.

LIVE from Current NOLA: Scaling Streaming in the AI Era | Life Is But A Stream

From the heart of the expo hall at Current NOLA, this special episode drops you into the conversations, energy, and breakthroughs as they happened. Host Joseph Morais and co-host Adi Polak talk with data streaming leaders and community voices to unpack how teams are using Apache Kafka and Apache Flink to power low-latency, AI-ready applications—covering patterns from usage-based billing and hybrid operations to cost efficiency and streaming governance. You’ll also hear how shift-left processing and governance make AI-ready data possible at scale.

Scenario Testing: A Complete Guide For QA And Software Teams

In contemporary software engineering, it is not sufficient to confirm that an application works feature by feature – it also needs to behave correctly for real users in the real world. This is where scenario testing plays a significant role. Scenario testing bridges the gap between functional testing and real user experience by validating software through simulated end-to-end user journeys.

Better integration tests in Cursor using proxymock

Cursor is fantastic at cranking out code changes. I recently used it to splice a brand-new downstream API call into one of our Go microservices, and the diff looked great. The unit tests finished before I lifted my coffee mug, yet I still had zero certainty the change would survive contact with real traffic. That gap is all about integration tests, so I paired Cursor with proxymock and the outerspace-go demo service to prove the behavior end to end.