AI coding agents are dramatically increasing development velocity, but they are also amplifying a long-standing problem: testing does not scale at the same rate as code generation. As agents produce more code, faster, and across broader surfaces, manual and semi-automated testing quickly becomes the bottleneck, reintroducing risk at precisely the point where teams believe they are moving faster.

This session explores a central paradox of AI-assisted development: the same agents that exacerbate the automated testing problem are also the most viable solution to it. We'll examine why traditional test approaches struggle in an agent-driven world, particularly for complex systems like CICS and z/OS, where meaningful validation requires end-to-end integration testing, not just unit tests. We'll discuss how AI-generated code frequently ships without corresponding tests, why hand-crafted integration tests cannot keep pace, and how this gap undermines confidence in AI-accelerated delivery.

The second half of the session focuses on the opportunity: using AI coding agents to generate new Galasa integration tests alongside, or even ahead of, the code they produce.
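
For attendees unfamiliar with Galasa, the sketch below shows roughly the shape such an agent-generated test might take: a plain Java class driven by manager annotations, exercising a CICS region through a 3270 terminal. It is a minimal illustration following the pattern of Galasa's published sample tests; the class name, the "PRIMARY" tag, the HELO transaction id, and the expected screen text are all hypothetical placeholders, not part of any real application.

```java
import static org.assertj.core.api.Assertions.assertThat;

import dev.galasa.Test;
import dev.galasa.cicsts.CicsRegion;
import dev.galasa.cicsts.CicsTerminal;
import dev.galasa.cicsts.ICicsRegion;
import dev.galasa.cicsts.ICicsTerminal;

@Test
public class HelloCicsIVT {

    // Galasa binds this field to a CICS region matching the tag,
    // using configuration held in the Galasa ecosystem.
    @CicsRegion(cicsTag = "PRIMARY")
    public ICicsRegion cics;

    // A 3270 terminal connected to the same region.
    @CicsTerminal(cicsTag = "PRIMARY")
    public ICicsTerminal terminal;

    @Test
    public void regionAndTerminalProvisioned() {
        // Installation-verification check: the managers wired both
        // fields before any test method ran.
        assertThat(cics).isNotNull();
        assertThat(terminal).isNotNull();
    }

    @Test
    public void runHypotheticalTransaction() throws Exception {
        // HELO is a placeholder transaction id; a real generated test
        // would drive an application transaction end to end and assert
        // on its actual output.
        terminal.clear().waitForKeyboard()
                .type("HELO").enter().waitForKeyboard();
        assertThat(terminal.retrieveScreen()).contains("HELLO FROM CICS");
    }
}
```

Because the test is ordinary annotated Java, it is exactly the kind of artifact a coding agent can emit next to the application change it proposes, which is the workflow this session will explore.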


