• S103 - Turbo-charge CICS knowledge for AI use

    Session Evaluation

    Experienced CICS system programmers are often the single best knowledge base their organisation has — but much of that value is scattered across inboxes, chat threads, ticket histories, and, most critically, “tribal knowledge” that never makes it into formal documentation. In this session, “Turbo‑charge CICS knowledge for AI use,” we’ll show how to turn that fragmented expertise into an AI‑ready knowledge layer that can reliably support modern AI assistants and specialised CICS agents. We’ll focus on pragmatic, low-friction techniques for extracting high-signal operational know‑how: structured SME conversations (remote-friendly), capturing real troubleshooting narratives, and using transcripts as raw material to produce curated, reviewable knowledge documents — so your best practices and hard-won lessons become reusable, searchable, and consistently available. You’ll learn how to prioritise what to capture (runbooks, “why we do it this way,” failure modes, and decision heuristics), how to reduce ambiguity and “it depends” answers into context-rich knowledge, and how to establish lightweight validation so the content stays trusted as CICS evolves. The goal: move from ad hoc expert interruption to scalable expertise-on-demand, enabling AI systems to go deep on CICS-specific questions rather than staying generic.

  • S108 - AI Coding Agents: Accelerating Delivery...and Breaking testing?


    AI coding agents are dramatically increasing development velocity — but they are also amplifying a long‑standing problem: testing does not scale at the same rate as code generation. As agents produce more code, faster, and across broader surfaces, manual and semi‑automated testing quickly becomes the bottleneck, re‑introducing risk at precisely the point teams believe they’re moving faster.

    This session explores a central paradox of AI‑assisted development: the same agents that exacerbate the automated testing problem are also the most viable solution to it. We’ll examine why traditional test approaches struggle in an agent‑driven world, particularly for complex systems like CICS and z/OS where meaningful validation requires end‑to‑end integration testing, not just unit tests. We’ll discuss how AI‑generated code frequently lacks corresponding tests, why hand‑crafting integration tests does not keep pace, and how this discrepancy can undermine confidence in AI‑accelerated delivery.

    The second half of the session focuses on the opportunity: using AI coding agents to generate new Galasa integration tests alongside — or even ahead of — the code they produce.


  • S109 - Integrating CICS with AI - MCP server in CICS TS 6.3


    Come to this session to find out how to integrate CICS with AI using the new Model Context Protocol (MCP) server support in the latest CICS TS version, 6.3.
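    To give a flavour of what an MCP integration involves, here is a minimal sketch of the JSON‑RPC 2.0 messages an MCP client exchanges with any MCP server, per the Model Context Protocol specification. The tool name `inquire_transaction` and its arguments are purely hypothetical illustrations, not part of the CICS TS 6.3 documentation.

    ```python
    import json

    def build_tools_list_request(request_id: int = 1) -> str:
        """Build a JSON-RPC 2.0 'tools/list' request (MCP tool discovery)."""
        return json.dumps({
            "jsonrpc": "2.0",
            "id": request_id,
            "method": "tools/list",
        })

    def build_tool_call_request(tool: str, arguments: dict, request_id: int = 2) -> str:
        """Build a JSON-RPC 2.0 'tools/call' request invoking a named tool."""
        return json.dumps({
            "jsonrpc": "2.0",
            "id": request_id,
            "method": "tools/call",
            "params": {"name": tool, "arguments": arguments},
        })

    # Example: a client asking a (hypothetical) CICS-hosted tool about a transaction.
    print(build_tool_call_request("inquire_transaction", {"transid": "CEMT"}))
    ```

    The transport (for example, HTTP into the CICS region) and the actual tools exposed are exactly the kind of detail this session will cover.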

  • Will Yates