TestCon 5
Bringing together the minds shaping the future of software testing — bold ideas, fresh perspectives, and real-world impact.
TestCon 5 is more than a conference — it’s where the future of software testing takes shape. Across two immersive days, industry leaders, innovators, and changemakers come together to share bold ideas, real-world insights, and forward-thinking strategies. From evolving technologies to human-centered quality practices, this is your space to learn, connect, and elevate your impact in a rapidly changing digital world.
SPEAKERS
Industry experts sharing insights, experience, and innovation in software testing.
LLMs in Test Automation
Dr Klaudia Dussa-Zieger is team leader for consulting at imbus and a recognised expert in the field of software testing. She specialises in test management, the continuous improvement of test processes and testing of and with AI.
With a strong passion for testing and quality, she has been involved in customer projects for more than 25 years, is a trainer for the ISTQB® Certified Tester Foundation and Advanced Levels, and has taught for many years as a lecturer in software testing at the University of Erlangen-Nuremberg. She is also a frequent conference speaker on the testing topics she cares about.
Dr Dussa-Zieger contributes her expertise to standardisation work: since March 2009, she has been the chairwoman of the DIN working group ‘System and Software Engineering’ and is actively involved in international standardisation, including ISO/IEC/IEEE 29119. She has been a member of the German Testing Board (GTB) for over ten years and is currently its deputy chairwoman. She is also President of the ISTQB® (International Software Testing Qualifications Board) and heads the AI Taskforce there, which deals with the challenges and potential of AI in software testing.
This talk opens with a concise introduction to Large Language Models (LLMs), outlining how they work and the principles that shape their capabilities in practical use. The core of the session focuses on concrete applications of LLMs using the generic test automation architecture as framework, demonstrating how these models can support, accelerate, and enhance various testing activities.
Several use cases illustrate the breadth of possibilities—from test design and analysis to automation support. A detailed example showcases a Retrieval-Augmented Generation (RAG) approach built with open-source LLMs running in a private cloud environment, highlighting both opportunities and architectural considerations. The session also offers a brief look at AI agents and their emerging potential in testing workflows, supported by a practical example.
How to Travel with a Durian Mooncake
Olivier is the President of the CFTL (https://www.cftl.fr).
For several years he has held key roles within the ISTQB as Governance Officer, President, and Vice-President (https://www.istqb.org).
He acts as liaison with various certification schemes (IREB, IQBBA, TMMi…).
With a career of more than 25 years in software quality, he is also Vice-President of ps_testware SAS, an IT services company specialising in consulting and training.
Olivier regularly speaks at international conferences and has published numerous articles in the online press and on the web.
Starting from a real-life anecdote, a delayed bag during an eventful connection, I will walk in detail through the different stages of the baggage-tracking process and show how, even when every requirement seems well thought out and fulfilled, the system can still fail, causing loss of trust and dissatisfaction. Describing the feelings at each twist and turn will help the audience better grasp the difference between UX (user experience) and the paper process, and understand why UX is what ultimately matters.
The whole talk is delivered in the humorous, offbeat style that is my signature; a bonus slide is reserved for future travellers.
The Quality Journey Through Web Protocols: From REST to the AI Frontier
Sebastian Małyska, President of the Polish Quality Board (PQB), ISTQB® Secretary
A software quality enthusiast with over 20 years of experience proving that “it works on my machine” is not a testing strategy. Has worked as a manual tester, automation engineer, QA Lead, and QA Manager, basically wherever quality (or patience) was missing. Speaker at national and international conferences and a program committee member who enjoys talking about QA almost as much as reporting other people’s bugs. Codes mainly in Python, because life is too short for bad code and even worse languages. President of the Polish Quality Board, Secretary of the ISTQB® Board, member of the IREB® community, and co-founder of ŁuczniczQA in Bydgoszcz. On a daily basis, fights for better quality, fewer critical defects, and more common sense in IT projects.
As software architectures evolve, the way we communicate between systems becomes increasingly complex. For a Quality Expert, “testing the API” is no longer a one-size-fits-all task.
It requires a deep understanding of unique architectural styles, from the ubiquity of REST and the rigid standards of SOAP to the real-time demands of WebSockets and Webhooks.
In this session, Sebastian Małyska—drawing on 20 years of QA leadership—strips away the buzzwords to provide a practical roadmap for testing modern communication.
We will explore:
The “Big Five”: Actionable strategies for REST, SOAP, GraphQL, WebSockets, and Webhooks.
The AI-API Gap: Why 89% of developers use AI, but only 24% are ready to test APIs for AI agents.
The Model Context Protocol (MCP): A first look at the emerging standard that will redefine how we test AI-to-API integrations.
Your Tests Don’t Matter (Unless They Create Business Value)
“There is more than one way to do it, but do it right the first time!” – Joel Oliveira
Joel Oliveira’s career has been a thrilling journey through the exciting world of software development. With over two decades of experience, he has tackled a variety of roles, including developer, tester, technical and project manager, quality and engineering manager, and everything in between. Throughout his career, Joel has led teams of engineers in a diverse range of industries, from telecommunications to government, finance, defence, and aerospace.
Driven by his dedication to improving the recognition and proficiency of the testing community, Joel founded the first online testing community in Portugal in 2009 and went on to establish the PSTQB – Portuguese Software Testing and Qualifications Board the following year. His expertise in the field has earned him a place in the ISTQB working groups since 2011.
As a mentor, Joel takes pleasure in sharing his experiences and knowledge with others. You can often find him giving talks and workshops on a variety of topics, including software testing (manual, exploratory, automation, mobile, performance, security), engineering processes (waterfall, agile, continuous improvement, and CMMI), and career management (for testers and beyond).
In many organizations, testing is still measured by volume: number of test cases executed, defects logged, or automation coverage achieved. But these metrics say very little about what actually matters: business outcomes.
At the same time, AI is rapidly taking over repetitive testing activities, forcing a fundamental question: what is the real value of a tester?
This talk challenges the traditional testing mindset and introduces a new perspective focused on value. It explores how testers can evolve from execution-focused roles to strategic contributors who influence decisions, reduce risk in meaningful ways, and maximize business impact.
We will discuss which metrics truly matter, which skills need to evolve, and what changes in thinking are required to avoid becoming irrelevant.
Because in a world where AI can execute tests, only humans who understand value will remain essential.
The GASQ AI Engine — How We Built a Dedicated AI System for Software Testing Certification
Werner has been active in software quality certification since 2001. He was involved in ISTQB from its early days at the international level and has contributed to the founding of several standardization organizations in the software testing and quality domain. Today, he serves as Vice Governance Officer of ISTQB.
Werner is the CEO of GASQ Service GmbH, an international certification body he co-founded in Brussels in 2005. Under his leadership, GASQ has grown into a globally recognized organization conducting 20,000 examinations per year across more than 100 countries in 12 languages. GASQ serves as a certification body for internationally recognized frameworks including ISTQB, A4Q, IREB, and ISAQB.
GASQ has distinguished itself through technological self-reliance — developing nearly all technical products in-house, including proprietary examination software with more than 30 integrated security features and the GASQ AI Engine: a dedicated AI system built specifically for the certification and assessment domain.
Werner brings 25 years of hands-on experience at the intersection of certification, standardization, technology, and quality assurance in the global software industry.
In 2023, GASQ set out to build an AI system for marking certification exams. The early results were sobering: general-purpose LLMs could not deliver the accuracy required for professional certification. Rather than abandoning the idea, we built a dedicated AI Engine from the ground up — combining a comprehensive Body of Knowledge across Software Testing, Requirements Engineering, and Business Analysis with a RAG architecture and multiple large language models. A critical design principle: the system learns from expert corrections, but cannot be manipulated. Submitting the same wrong answer a thousand times will not change its knowledge.
This talk traces the journey from that first experiment to a fully operational AI Engine powering four products: the ISTQB Practical Tester by A4Q, ISTQB Prep for unlimited exam question generation, the World’s Best Tester initiative providing a global tester ranking as a community contribution, and a recruitment solution that gives HR teams an objective, AI-driven assessment of practical testing competence — tailored to company-specific needs.
The architecture is extensible by design. Usability, GDPR, and ISO 27001 are next, demonstrating that the approach works for any structured educational domain.
Key Takeaways:
– Why general-purpose LLMs fail at certification-level assessment and what it takes to build a specialized AI system
– How a dedicated RAG architecture with expert-supervised learning ensures quality and prevents manipulation
– Practical results: from AI-powered exam preparation to global tester rankings and recruitment solutions
– How to design an extensible AI knowledge platform that can expand into new domains
TMMi Basics (what it is, and how to get started)
Zsolt is a Test Manager, trainer, and test process improvement expert with extensive international experience across the banking, aviation, energy, logistics, and retail sectors. He applies ISTQB, IREB, and TMMi standards in a practical and results-oriented way, delivers testing trainings, regularly speaks at international conferences, and hosts professional podcasts focused on software testing.
He serves as Vice President of the Hungarian Testing Board (HTB), Secretary of the TMMi Foundation, and Head of the TMMi Hungary Local Chapter, and is also a member of the ISTQB Marketing Working Group.
Between 2002 and 2019, he worked at Lufthansa Systems. Since then, he has been the founder of the IT consulting companies DHX Ltd. and TesterLab.
This presentation introduces TMMi from a true beginner’s perspective—what it is, why it exists, and how it can be practically used. Building on the People–Process–Technology (PPT) framework, we explore how testing fits into the broader delivery system of organizations. We highlight the different goals of key stakeholders—from executives to test leaders to testers—and how these perspectives align.
We then look at how testing evolves over time: what happens if teams do not improve, and what changes when development is driven systematically using TMMi. Real-world outcomes from organizations that have already taken this journey illustrate the tangible benefits.
Finally, the session provides a simple, actionable 4-step starting approach for teams who want to begin their TMMi journey and improve their testing maturity.
Generative AI for Organisations
With over six years of experience as an ISTQB® Trainer and software testing expert, I have a proven track record of delivering high-quality training and testing solutions in complex, interconnected domains such as IoT and AI. As the CEO of Certilog FZCO, a leading software testing consultancy in the UAE, I oversee the delivery of testing services and projects for clients across various sectors and regions.
As the President of the UAESTQB, the official representative of the ISTQB® in the UAE, I provide strategic leadership and direction to the Board and foster collaboration with partners and stakeholders in the testing community. My passion for advancing the testing profession and enhancing the testing standards is reflected in my involvement in several groundbreaking research and innovation projects, such as H2020 ARMOUR, H2020 Autopilot, and H2020 SMESEC, where I have contributed to the development of testing methodologies, frameworks, and tools for IoT security, AI detection, and cyber-security.
With a Ph.D. in Computer Science and a focus on Model Based Testing for IoT web services, I have a deep understanding of the technical and business challenges and opportunities in the emerging fields of IoT and AI. I’m always eager to share my knowledge and insights with others and to learn from the best practices and experiences of the industry.
TBC
Algorithms to Culture: Redefining Quality in the Age of AI
Born in 1981 in Ankara, Barış Sarıalioğlu graduated from TED Ankara College and earned his B.Sc. in Electrical and Electronics Engineering from Middle East Technical University (METU). He completed his M.Sc. in Computer Engineering at METU, followed by the joint Executive MBA program of Sabancı University & MIT Sloan School of Management. Between 2014 and 2018, he pursued his doctoral studies in Management Science at SMC University.
With more than 20 years of experience in digital transformation, artificial intelligence, software development, agility, and software testing, as well as user experience design, design thinking, and innovation management, Sarıalioğlu has contributed to global projects across industries such as telecommunications, banking, automotive, aviation, defense, e-commerce, and insurance. He has led teams of varying sizes, aligning corporate goals with innovative strategies.
He has participated in over 100 international conferences in more than 50 countries as a speaker, panelist, moderator, and program committee member. In addition to two technical books on software quality (2013, 2015), he has authored three children’s books since 2021, introducing science and technology to young readers through storytelling. He is the co-founder and managing partner of TesterYou and Codejust, and also teaches as a part-time lecturer at various universities.
Artificial intelligence is transforming software systems from deterministic machines into evolving, probabilistic organisms. Yet while algorithms learn fast, organizations often struggle to adapt their understanding of quality.
In this keynote, Barış Sarıalioğlu explores why AI failures are rarely caused by weak models—and why they are far more often rooted in organizational culture, trust, and decision-making habits. Drawing on real-world cases from banking, automotive, and large-scale enterprise systems, the talk challenges traditional testing mindsets and introduces a new perspective on digital quality culture.
Blending ISTQB principles, Modern Testing thinking, and insights from the philosophy of science, this session reframes testing as a leadership responsibility rather than a technical activity. Participants will leave with a clearer understanding of how quality, risk, ethics, and learning must evolve together in AI-driven systems.
Is AI a Must to Assess and Improve Your Test Process?
IQBBA, IREB, ISTQB, A4Q, TMMi: Acronyms That Can Change Your Life!
Testing is one of the main success factors in enhancing the quality of software and information systems. After 23 years of involvement in IT projects as a developer, tester, and test manager, I now operate on large IT projects to contribute concretely to their success by ensuring that test activities are professionally managed and implemented at all levels, by all stakeholders. Audit, consulting, and training are my main activities to improve the overall testing process, with a single goal in mind: better quality for software and information systems by meeting the requirements of the business owner and keeping end users happy.
If you are in a test management position, you know that the cost of testing may unfortunately be seen as money wasted! For this reason you must be able not only to measure the cost of test activities but also to demonstrate their profitability based on qualitative and quantitative KPIs. And even when everything seems to be in place and profitable, you have to keep improving!
The TMMi framework is a worldwide model used to assess, improve, and certify organizations with regard to their test activities. But what about AI in this assessment and improvement process? Should we use it, and how? In this talk, Eric, a TMMi Lead Assessor, will answer these questions with relevant, reusable tips!
Talk 2 – IQBBA, IREB, ISTQB, A4Q, TMMi: Acronyms That Can Change Your Life!
You probably know one or two of them, maybe three… but do you know what lies behind these acronyms? Beyond being mere certifications, they are powerful tools for working better, progressing faster, and building a career. Drawing on his more than 25 years of experience, Eric will present an approach that will help you set the right course starting now!
Applying Lean Management to Improve Quality
Nirmala Saneechur is a Test Manager with 20 years of experience in software quality, team management, and test process optimisation. Known for her collaborative leadership and rigour, she has led the establishment of full QA capabilities, covering functional and non-functional testing as well as the continuous improvement of quality practices. As Secretary of the MSTQB (Mauritian Software Testing Qualifications Board), she is actively involved in promoting international software testing standards. A certified Lean Green Belt (2025), she applies operational-efficiency approaches to optimise processes, quality, and team performance. Versatile and results-oriented, she excels in planning, project management, and guiding organisations towards excellence.
Lean Management offers a pragmatic approach to improving quality by eliminating waste and strengthening process efficiency. I will present how Lean principles were applied in our project to achieve measurable gains.
We relied on three key tools:
Visual Management: making information visible to detect problems quickly and facilitate collaboration.
PDCA: structuring continuous improvement through short cycles of planning, testing, checking, and adjusting.
A3: analysing root causes and defining robust corrective actions.
Thanks to this approach, the project recorded a notable reduction in defects, greater process stability, and increased customer satisfaction.
The Other ROI: Measuring Your Automation’s Regret Over Investment
As a Lecturer at Vilnius University, Dmitrij teaches Software Testing to the next generation of computer scientists. He grounds his academic instruction in deep industry experience from his current role at Insoft, where he oversees service delivery and staff development. His diverse career spans automation, development, and the management of complex, mission-critical systems.
We’re usually taught that ROI is a simple math problem, but after delivering all kinds of test automation projects, I’ve realized the traditional definition is a bit of a trap. Trying to calculate the “true value” of automation often feels like chasing a ghost, as numbers can be affected by office politics, irrational emotions, and the messy reality of software development.
In this talk, I’ll share why pinning down a single ROI figure can be such a nightmare. Drawing on my experience with both successful frameworks and “maintenance monsters,” I’ll explain how I have learned to talk about the value of test automation without getting lost in the absurdity of the spreadsheets.
What If We Were All Software Bio-Developers?
Between the urgency of delivering fast and the urge to polish everything endlessly, there is a precious balance. That is exactly where I am taking you.
For more than 20 years, I have been helping companies large and small optimise their development processes and rediscover that famous balance between performance and relevance.
Software quality and the change management that flows from it are my expertise. But what I am deeply passionate about is clarity: clarifying what is blocking, what repeats itself needlessly, what exhausts teams without moving projects forward. Making processes more fluid, more human, and more intelligent. And above all: sharing this knowledge so that it becomes collective.
I am convinced that we can keep up the pace without sacrificing quality. What I aim for is that moment when everything falls into place, when we can be proud of what we deliver, not because it is perfect, but because it is well thought through.
Let’s say it plainly: nobody likes delivering something that will have to be redone the next morning.
What if software development were not just a matter of rigid, mechanical construction, but a living process to be cultivated with care?
In this session, we explore a posture that is essential for evolving in agile, complex environments: that of the software bio-engineer. This role emerges in a context where linear approaches are reaching their limits in the face of uncertainty, emergence, and the growing complexity of systems.
Drawing on complex systems theory and the biological world, this talk proposes to redefine agility no longer as a method, but as a capacity to observe, diagnose, and guide living systems in constant evolution. This posture applies at every level of the organisation: from agile portfolio management to product agility, by way of DevOps, SRE, and automation practices.
It encourages agile leadership founded on humility and resilience, supports team performance through well-being and continuous experimentation, and nourishes a living, inclusive organisational culture.
It also extends to responsible agility, promoting sustainability, technological frugality, and ecosystem adaptation.
Finally, it opens new perspectives for navigating environments where artificial intelligence and adaptive systems are transforming our relationship with software.
In a world where rigid plans no longer hold, the software bio-engineer adopts a humble, exploratory, systemic posture, in the service of an agility deeply rooted in the living, both inside and outside the IT sector.
Testing Is a People Business: Why Quality Fails Without Human Connection
For over 20 years in the field of software development and testing, I now support companies in building better software, making processes more agile, and advancing software development practices.
Across various industries, and in both traditional and agile projects, I combine my technical expertise with my coaching skills to meet projects and teams where they currently are and support them through change and transformation.
This talk explores why software testing is fundamentally a people-centered discipline, even and especially in the age of artificial intelligence. Drawing on 25 years of experience in software projects, the presentation challenges the idea that advanced tools or AI will replace testers. While modern testing tools can generate test cases, heal locators, analyze risks, and automate execution, they cannot solve the core problems that cause quality to fail: miscommunication, silo thinking, lack of trust, and weak collaboration.
The session argues that AI accelerates what and how we test, but it also amplifies existing team dynamics. Without strong human interaction, even the best test strategies, CI/CD pipelines, and AI-powered solutions lose their effectiveness. Through personal failure stories and real project experience, the talk shows how treating testing as a purely technical or inspection-focused activity leads to frustration and poor outcomes. The turning point comes with the realization that testing is a social activity that depends on dialogue, shared understanding, and leadership through influence rather than authority.
Attendees gain a clear perspective on the tester’s most underrated skill: the ability to connect people. The talk highlights how communication, breaking down silos, building quality champions, fostering psychological safety, and leading without formal authority directly impact software quality. In an AI-driven development landscape, these human capabilities become the deciding factor between faster feedback that improves quality and faster failure that merely ships problems sooner.
Beyond the 404: Breaking the Curse of Slow Systems with AI
Sumitra Godara is a Lead Performance Tester and QA Automation specialist with 10 years of experience helping teams build faster, more scalable, and more reliable systems. She specializes in performance testing, test automation, and CI/CD-driven quality practices, helping organizations identify bottlenecks before they impact users. Sumitra is particularly interested in applying AI-assisted analysis and modern observability practices to accelerate performance investigations and transform production insights into smarter testing strategies.
Revealing Performance Bottlenecks with AI, Testing, and Monitoring
Production suddenly feels slow. Dashboards spike, alerts go wild, and someone inevitably asks: “What did we break this time?”
Instead of guessing, modern teams use AI-driven insights, live monitoring, and targeted performance testing to quickly pinpoint the real bottleneck. In this session, we’ll walk through a real performance incident and show how these techniques help teams move beyond the obvious signals to uncover hidden causes, reproduce the issue, and turn production chaos into actionable insight.
CO-SPEAKER OF SUMITRA GODARA
Rao is a Quality Engineering Lead with over 10 years of experience driving test management, automation strategy, and QA capability across enterprise systems in banking, payroll, and large-scale platforms. He has led end-to-end testing initiatives, built scalable automation frameworks, and integrated quality practices into CI/CD pipelines to improve release confidence and delivery speed. Rao is also a QA and Test Automation trainer, regularly mentoring engineers and delivering hands-on training on modern testing tools and practices. Passionate about advancing quality engineering, he focuses on combining test management, observability, automation, and AI-assisted insights to diagnose complex system issues and strengthen software reliability.
Taking Action to Lead Software Testing
Insights from children about testing
Mr. Kari Kakkonen has worked in software testing for almost 30 years. He is the co-author of ‘ACT 2 LEAD Software Testing Leadership Handbook’.
Kari is the 2021 EuroSTAR Testing Excellence Award winner, Tester of the Year in Finland Award 2021 winner, and DASA Exemplary DevOps Instructor Award 2023 winner. He is the author and CEO of Dragons Out Oy, which created a fantasy book to teach software testing to children. Kari works at Gofore as Service Owner, Customer Expertise Development, running a training business and providing software testing, agile, DevOps, and AI training services in Europe. He has an M.Sc. from Aalto University (aalto.fi). He works mostly with agile testing, lean, test automation, DevOps, and AI.
Kari has been working in ICT consulting, training, finance, insurance, pension insurance, public sector, embedded software, telecom, gaming, and industry domains.
Kari was on the Executive Committee of ISTQB (istqb.org) 2015-2021. He is on the Board of Directors of TMMi (tmmi.org). He is the Treasurer of FiSTB (fistb.fi). He is the co-founder of FiSTB Testing Assembly, the biggest testing seminar in Finland, for which he has been a co-organiser for 10 years.
Kari is a singer, snowboarder, kayaker, husband and dad.
Kari has given public talks and is a passionate advocate for software testing and QA. He is active on LinkedIn and welcomes you to network.
In the evolving landscape of software development, effective testing leadership has become the key to delivering high-quality products. However, many organizations struggle with challenges: misunderstood testers, testing not valued, insufficient testing resources, and ineffective testing practices. This talk revolutionizes your approach to software testing leadership, empowering you to drive meaningful change within your organization.
Drawing from extensive experience spanning decades of leadership, mentoring, and testing, and distilled wisdom from the book “ACT 2 LEAD – Software Testing Leadership Handbook,” Kari will explain the critical actions that leaders at all levels, including you, must take to elevate software testing from a siloed activity to a company-wide strategic initiative.
We’ll go through eight principles. We’ll explore how to integrate testing into all software development activities, how to choose the right testing based on context, how to create visibility into the quality of testing practices and test results, and how to strike the perfect balance between automated testing and human-driven exploration. We’ll then talk about learning as a way to improve, enabling a culture of quality and continuous improvement, aligning testing efforts with product risks, and implementing diverse testing approaches to find diverse sets of defects and observations.
Testing leadership empowers everyone doing testing, significantly improves product quality and customer satisfaction, reduces production defects and associated costs, and accelerates delivery cycles without compromising on quality.
You can transform your software testing leadership approach. By applying the principles shared in this talk, you can drive improvements in your organization’s testing practices and software quality, reflecting positively on your organization’s business. You’ll be ready to implement these principles at all leadership levels of your organization.
Talk 2 – Insights from children about testing
The popular Dragons Out book about software testing is much liked. It has been read in schools. Schools have used the free supporting software testing presentation to organize a lesson or two to discuss software testing.
I’ve been a guest lecturer in several classes at advanced schools. On those occasions, I’ve interviewed children about how they learned software testing and what do they think about it. Now it is time to reveal the secrets of how children learn to test. There are also insights into how adults could learn in a much more effective and fun way. We’ll talk about storytelling as a way to learn but also of hands-on practice and fun as an essential learning element.
I’ll also recap the book project briefly. I chose to write a book about software testing targeted at children to help address the shortage of software testing in the ICT business, a shortage that only seems to be growing. I also wanted to show people an alternative route into the exciting world of software: coding is not the only thing the software world has to offer. The process took almost three years, during which I founded a company for the book project, wrote the book in Finnish, went through numerous pilot readers and review rounds, translated the book into English, found publishers for both the Finnish and English editions, organized a crowdfunding campaign, and gave many lectures at schools as well as talks at conferences and fairs. The project culminated in associations and companies donating books to schools, with a diploma of donations handed to the Minister of Education. And finally, of course, I as the author received the EuroSTAR Testing Excellence Award, which I feel recognizes not only my 25 years of dedication to the software testing community but especially this book-for-children initiative.
I’ll reflect on how we can take insights from how children learn into how adults learn.
Smarter Testing with AI: The Next Evolution in Quality Engineering
Bhavna is a seasoned IT professional with over 14 years of experience delivering technology and quality engineering initiatives for global clients across multiple industries. She specializes in software quality assurance, test management, and end-to-end delivery across the Software Development Life Cycle (SDLC).
Throughout her career, she has successfully led cross-functional teams, implemented effective QA strategies, and driven continuous improvement in Agile and Scrum environments. Her expertise spans functional and non-functional testing, test governance, and people leadership, helping organizations deliver reliable and high-quality software solutions.
Bhavna is particularly passionate about innovation in software testing and the evolving role of Artificial Intelligence in quality engineering, exploring how AI can enable smarter testing, improve efficiency, and transform modern QA practices.
As software systems become more complex and release cycles accelerate, traditional testing approaches are struggling to keep up. Artificial Intelligence (AI) is emerging as a powerful enabler in transforming how testing is performed, moving beyond basic automation to intelligent, data-driven quality engineering.
In this session, “Smarter Testing with AI: The Next Evolution in Quality Engineering,” we will explore how AI can enhance software testing through intelligent test generation, self-healing automation, predictive defect detection, and optimized test execution. The talk will highlight practical examples of how AI helps teams improve test coverage, reduce maintenance effort, and accelerate delivery.
CO-SPEAKER OF BHAVNA RAMTOHUL
Vrigesh is Director of AI & Innovation at BDO IT Consulting, where he leads initiatives in artificial intelligence, digital transformation, and emerging technologies. With over 20 years of international experience across Africa, Europe, the Middle East, and the United States, he advises organisations on leveraging technology to drive innovation, efficiency, and strategic growth.
Previously Executive Director of the Mauritius Emerging Technologies Council (METC), Vrigesh has played a key role in shaping national AI and emerging technology strategies and advising government and industry leaders on digital transformation. He is a member of UNESCO’s AI Ethics Experts Without Borders and actively contributes to national and international technology advisory initiatives.
Building Sustainable End-to-End Test Automation at Scale
Lilia Gargouri is Head of QA at mgm technology partners. She stands for sustainable, scalable, and efficient quality assurance in complex eGovernment projects. She is committed to accessibility, trains career changers, and passionately advocates for greater visibility and equal opportunities for women in IT. As the inventor of a model-based, low-code test automation tool, she has significantly simplified and accelerated quality assurance in critical eGovernment projects. A member of the German Testing Board, she actively drives the development of industry quality standards.
End-to-end test automation has become a cornerstone of modern software quality assurance. Yet in large and heterogeneous system landscapes, many teams quickly hit limitations: fragile tests, high maintenance effort, and a lack of scalability often prevent automation from realizing its full potential.
In this talk, I will show how thoughtful architectural principles, clear testing strategies, and targeted organizational measures can lead to a sustainable E2E testing ecosystem. Drawing on more than ten years of hands-on experience in automating enterprise applications, I will present proven techniques that significantly improve test stability, reusability, and overall efficiency.
Using real-world examples, the talk illustrates how teams can design their tests to remain maintainable and reliable even as complexity grows—regardless of platform, browser, or framework. Attendees will walk away with concrete recommendations for strategically evolving their test automation and ensuring its long-term success.
The Journey to Smarter Playwright Testing with AI and MCP
Vaishnavi Dookheet is a QA Engineer at SD Worx, working within the Test Centre of Excellence (TCoE), where she operates at the intersection of quality engineering, test automation, and artificial intelligence. As a young professional, she has gained strong hands-on experience across modern testing practices, continuously building expertise that drives her curiosity, growth, and passion for innovation.
With an academic background in Data Science, Vaishnavi brings an analytical and pattern-driven mindset to software testing. This perspective enables her to explore and implement AI-driven testing and automation approaches, helping teams achieve smarter, faster, and more efficient quality processes.
In her role, she actively supports agile squads by contributing to test strategy, automation maturity, and process improvement. She has hands-on experience with automation frameworks and is particularly passionate about Playwright, regularly delivering workshops and knowledge-sharing sessions. Vaishnavi enjoys simplifying complex concepts, empowering teams, and fostering a strong quality culture.
She is especially interested in how artificial intelligence can be embedded into everyday testing and automation workflows, viewing AI not as a trend, but as a catalyst for meaningful innovation in quality engineering.
Outside of work, Vaishnavi enjoys hiking, exploring nature, and spending time by the sea. She also coaches swimming alongside her father, teaching children a life skill she deeply values.
Fun fact: she can patiently debug the most complex issues—yet still struggles to choose a meal when she’s hungry.
At SD Worx, agile delivery and continuous improvement demand testing practices that are both effective and adaptable. This session shares our journey toward smarter Playwright testing by applying modern testing techniques and augmenting automation with AI and the Model Context Protocol (MCP). From the perspective of the SD Worx Test Centre of Excellence, we’ll show how we evolved beyond traditional test execution to gain better context, clearer insights, and more meaningful feedback from our test results. Rather than live code demonstrations, the talk highlights our success story through real snapshots—including AI prompts, personas, and MCP interactions—to illustrate how intelligence and context can reduce noise, improve collaboration, and strengthen confidence in quality across teams.
CO-SPEAKER OF Vaishnavi Dookheet
Fayzaan Soobrattee is a Lead Test Automation Engineer based in Mauritius, with over seven years of experience in enterprise HR and web-based platforms. He specializes in building scalable, automation-first frameworks that improve software reliability, accelerate release cycles, and embed quality throughout the development lifecycle. Currently part of the Test Center of Excellence at SD Worx, Fayzaan drives both functional and non-functional testing initiatives, leveraging AI-assisted test design and Model Context Protocol (MCP) to create intelligent, optimized, and high-impact testing solutions. He is a strong advocate of shift-left testing, DevOps integration, continuous quality improvement, and data-driven automation strategies.
Previously, he served as a Software Developer in Test at Dayforce (formerly Ceridian), where he contributed to the Dayforce HCM suite, particularly within Recruiting modules, developing and maintaining BDD-based automation frameworks using SpecFlow and enabling QA teams to scale automation efficiently. He began his career at Proximity IO and Hangar Worldwide, transitioning from manual testing to automation engineering while gaining hands-on expertise in UI, API, load, performance, and security testing. Fayzaan holds a BSc in Computer Science from the University of Mauritius and is ISTQB Foundation certified. Passionate about innovation in quality engineering, he continuously explores advanced automation, performance optimization, and accessibility validation to deliver resilient, future-ready software solutions.
When AI Becomes Your Testing Colleague: Agentic AI in QA
Nishan Portoyan is an AI expert and Software Quality Ambassador with over 18 years of experience in software testing, test automation, and requirements engineering. He is deeply engaged in the testing community at both international and national levels, where he actively drives the evolution of testing practices and the integration of AI into the profession.
In his role at Infometis, Nishan leads AI consulting and quality initiatives, supporting companies in securely adopting AI, designing test automation strategies, and developing domain-specific AI assistants. His recent work focuses on Agentic AI: autonomous systems that plan, adapt, and execute testing tasks, which he has successfully introduced in the financial industry to demonstrate the future of testing.
A frequent international conference speaker and trainer, Nishan is passionate about sharing practical experiences, lessons learned, and innovative approaches that bridge technology and business. He combines deep technical knowledge with a strategic mindset, helping organizations achieve excellence in quality while navigating the opportunities and risks of emerging AI technologies.
Agentic AI is moving software testing beyond prompt-based assistance toward autonomous support that can plan, decide, adapt, and act across testing activities. This shift opens new possibilities for quality engineering, especially in environments where speed, complexity, and reliability matter at the same time.
In this talk, I will show how Agentic AI can transform software testing in real-world settings through two practical examples. The first example focuses on a regulated financial environment, where agent-based AI was introduced to support testing activities within a private banking context. The second example highlights another practical use case that shows how autonomous AI agents can improve testing workflows, strengthen coverage, and support faster feedback loops.
Rather than presenting Agentic AI as a distant vision, this session focuses on what happens when such systems are applied in practice. I will share what worked, where the real value emerged, and which challenges appeared once autonomous behavior entered the testing process. This includes lessons learned around control, traceability, quality of outcomes, and the level of human oversight needed to build trust in highly sensitive environments.
Attendees will gain a practical view of how Agentic AI is already reshaping testing, where its real strengths lie, and what teams need to consider before introducing autonomous test agents into their own context.