
Designing AI Conversations at Scale

Behaviour, Design, and Responsibility

Price: AU$39.00

What this book is about

AI conversations are no longer experimental. They are operational systems embedded in customer support, internal tools, agent workflows, and increasingly autonomous decision-making. These systems speak on behalf of organisations, influence outcomes, and act under conditions of partial understanding.

Despite rapid advances in language models, many conversational AI systems fail in the same predictable ways. They sound confident while being wrong. They rush to resolution before alignment. They escalate too late or not at all. They optimise for metrics while quietly eroding trust.

These failures are rarely caused by weak models or poor prompts. They are caused by design decisions made without a clear understanding of how AI systems behave under uncertainty.

This book is about those decisions.

What makes this book different

Designing AI Conversations at Scale reframes conversation design as a discipline concerned with system behaviour, internal design mechanisms, and responsibility at scale. It treats conversations as engineered systems composed of flows, decision logic, NLU structures, repair strategies, escalation boundaries, and confidence calibration.

Rather than focusing on tone, scripts, tools, or templates, the book provides durable mental models for designing conversational systems that must operate safely when language is ambiguous, context is incomplete, and consequences matter.

This is not a chatbot handbook. It is not a prompt-writing guide. It is not a customer journey or CX playbook.

It is a book about how AI conversation systems actually behave once deployed into real environments.

What you will explore

Inside the book, you will examine:

  • Why customer experience is cognitive and emotional work, not information delivery
  • How conversational structure, turn-taking, grounding, and repair function in practice
  • What intents and entities really represent, and why they often fail at scale
  • How generative AI changes conversational risk, not just capability
  • Why Agent Assist systems can quietly degrade human judgement
  • What changes when AI systems become agentic
  • How responsibility emerges from everyday design and deployment decisions

The book draws on real-world contact centre environments, large-scale AI deployments, and hands-on design practice across customer-facing AI, Agent Assist, and emerging autonomous systems.

Who this book is for

This book is written for:

  • Conversation designers and UX practitioners working with AI systems
  • AI practitioners and architects responsible for deployment decisions
  • Product leaders and CX leaders accountable for system behaviour
  • Organisations scaling conversational AI beyond experimentation

This is not an introduction to AI. The book assumes you are already working with conversational systems or preparing to deploy them at scale.

Who this book is not for

This book is not for readers looking for:

  • Step-by-step chatbot tutorials
  • Tool-specific implementation guides
  • Prompt libraries or copy templates
  • Generic Responsible AI frameworks

The focus is on thinking, judgement, and design responsibility, not mechanical instruction.

Why this matters now



As conversational AI becomes more capable, fluent, and autonomous, the cost of poor design increases. Failure becomes subtler, more persuasive, and harder to detect. The question is no longer whether AI systems will fail, but how they fail and who bears the consequences.

If you are responsible for designing, deploying, or governing AI conversations where uncertainty is the norm and failure has real impact, this book is for you.
