Mission Control
Context-Aware NPC Dialogue powered by LLMs.
Project Synopsis
NLP Quest-Giver aims to replace static dialogue trees with dynamic, generative conversations. By integrating an LLM (such as GPT-4o or Llama 3) into Unity through a Python middleware, NPCs can understand and respond to natural language input.
The core challenge is Structured Output. The AI must not only chat but also trigger game events (Give Quest, Spawn Item) by returning strict JSON data alongside its dialogue.
Why This Matters
Static NPCs break immersion. This project explores Retrieval-Augmented Generation (RAG) in games: giving NPCs memory of past conversations and knowledge of the current game world state.
Tech Stack
- Engine: Unity (C#)
- Middleware: Python (TCP socket server)
- Models: GPT-4o / Llama 3 for generation; BERT zero-shot for intent classification
Project Checkpoints
- Phase 1: Semantic Intent Extraction (Understanding)
- Phase 2: Dynamic Quest Generation (Logic)
- Phase 3: The Unity-AI Loop (Action)
- Phase 4: Final Polish & Master's Proof (Showcase)
Field Notes & Learnings
Key engineering concepts for Generative AI in Games.
1. Hallucination Control
Concept: LLMs lie. An NPC might promise the player a "Laser Sword" that doesn't exist in the game files.
Solution: System Prompting & RAG. Feed the LLM a strict list of available items/quests in the prompt context. If it tries to invent an item, the Python middleware substitutes a fallback response.
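A minimal sketch of that middleware guard, assuming a hypothetical `GAME_ITEMS` whitelist and reply schema; in practice the list would be loaded from the game database:

```python
import json

# Hypothetical item whitelist; in the real project this comes from the game DB.
GAME_ITEMS = {"rusty_sword", "health_potion", "iron_key"}

# The whitelist is injected into the prompt so the model only sees real items.
SYSTEM_PROMPT = (
    "You are a village blacksmith NPC. You may ONLY mention these items: "
    + ", ".join(sorted(GAME_ITEMS))
    + '. Reply as JSON: {"text": ..., "offered_item": ...}.'
)

def validate_reply(raw_reply: str) -> dict:
    """Reject replies that invent items the game cannot actually spawn."""
    fallback = {"text": "Hmm? Speak plainly, traveler.", "offered_item": None}
    try:
        reply = json.loads(raw_reply)
    except json.JSONDecodeError:
        return fallback
    item = reply.get("offered_item")
    if item is not None and item not in GAME_ITEMS:
        # Hallucinated item: substitute a safe, in-character fallback.
        return {"text": "I have nothing like that in stock.", "offered_item": None}
    return reply
```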
2. Structured JSON Output
Concept: Unity C# cannot parse a rambling paragraph of text.
Solution: Force the LLM to output JSON: `{"mood": "angry", "text": "Get out!", "action": "attack_player"}`. This allows C# to trigger animations and game logic programmatically.
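A small Python-side validator can enforce that envelope before anything reaches Unity. The `REQUIRED_KEYS` and `ALLOWED_ACTIONS` sets below are illustrative assumptions; with the OpenAI API, JSON mode (`response_format={"type": "json_object"}`) can additionally guarantee well-formed JSON at the source:

```python
import json

REQUIRED_KEYS = {"mood", "text", "action"}
ALLOWED_ACTIONS = {"none", "give_quest", "spawn_item", "attack_player"}  # assumed action set

def parse_npc_reply(raw: str) -> dict:
    """Parse model output into the envelope Unity expects, or a safe default."""
    fallback = {"mood": "neutral", "text": "...", "action": "none"}
    try:
        reply = json.loads(raw)
    except json.JSONDecodeError:
        return fallback
    if not REQUIRED_KEYS.issubset(reply) or reply["action"] not in ALLOWED_ACTIONS:
        return fallback
    return reply

print(parse_npc_reply('{"mood": "angry", "text": "Get out!", "action": "attack_player"}'))
```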
3. Latency Management
Concept: API calls take 1-3 seconds. Players hate waiting.
Solution: Stream the text as tokens arrive (typewriter effect). Alternatively, play a generic "Thinking" animation or a "Hmm..." sound effect to mask the loading time.
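A toy sketch of the typewriter pattern: `stream_tokens` stands in for a real streaming API (e.g. an OpenAI client call with `stream=True`), and the per-token delay is simulated:

```python
import sys
import time

def stream_tokens(reply: str):
    """Stand-in for a streaming LLM API: yields tokens as they 'arrive'."""
    for token in reply.split():
        time.sleep(0.15)  # simulated network delay per token
        yield token + " "

def typewriter(reply: str) -> None:
    """Print each token immediately so the player sees text within ~150 ms."""
    for token in stream_tokens(reply):
        sys.stdout.write(token)
        sys.stdout.flush()
    print()

typewriter("The old mill? Dangerous place. Bring a torch.")
```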
4. Intent Classification
Before generating dialogue, classify User Intent (a zero-shot sketch follows this list):
- Greeting: Fast, cheap response.
- Quest Inquiry: Needs deep context check.
- Hostile: Trigger combat immediately (bypass LLM generation).
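One way to implement this routing is Hugging Face's zero-shot classification pipeline, which needs no fine-tuning because the labels are supplied at call time; the `INTENTS` label set here is an assumption:

```python
# Requires: pip install transformers torch
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

INTENTS = ["greeting", "quest inquiry", "hostile"]  # assumed label set

def classify_intent(player_text: str) -> str:
    """Return the highest-scoring intent label for the player's message."""
    result = classifier(player_text, candidate_labels=INTENTS)
    return result["labels"][0]  # labels come back sorted by score

print(classify_intent("Hand over your gold or else!"))  # -> "hostile"
```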
Implementation
Step-by-step Execution Plan.
Phase 1: Semantic Intent (Week 1)
- UI: Build a chat window in Unity.
- Networking: Send player text to Python over a TCP socket (server sketch after this list).
- Classification: Use BERT/Zero-shot to detect intent (Greeting/Combat).
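A minimal sketch of the Python end of that link, assuming newline-delimited JSON framing and a placeholder `handle_message`; host, port, and the single-client setup are arbitrary choices:

```python
import json
import socket

HOST, PORT = "127.0.0.1", 9000  # assumed; Unity connects to the same address

def handle_message(payload: dict) -> dict:
    # Placeholder: intent classification and the LLM call happen here.
    return {"mood": "neutral", "text": f"You said: {payload['text']}", "action": "none"}

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(1)
    conn, _ = server.accept()  # Unity connects once at scene load
    with conn, conn.makefile("rw", encoding="utf-8") as stream:
        for line in stream:    # one JSON message per line
            reply = handle_message(json.loads(line))
            stream.write(json.dumps(reply) + "\n")
            stream.flush()
```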
Phase 2: Dynamic Quests (Week 2)
- Prompting: Design System Prompt for JSON output.
- Memory: Store conversation history in a Python list (sketched after this list).
- Validation: Cross-check generated quests against the Game DB.
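A sketch of that memory as a plain list, following the common chat-completions message convention; the `MAX_TURNS` cap (to bound prompt size and latency) is an assumption:

```python
MAX_TURNS = 10  # assumed cap: drop the oldest turns beyond this

class ConversationMemory:
    """Per-NPC history kept as a plain Python list of chat messages."""

    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt
        self.turns: list[dict] = []

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})
        self.turns = self.turns[-MAX_TURNS:]  # trim to keep token cost bounded

    def as_messages(self) -> list[dict]:
        """Full message list to send to the LLM on the next call."""
        return [{"role": "system", "content": self.system_prompt}] + self.turns

memory = ConversationMemory("You are Mira, the innkeeper.")
memory.add("user", "Any work for a sellsword?")
memory.add("assistant", "Rats in the cellar. Ten gold.")
```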
Phase 3: The Unity-AI Loop (Week 3)
- Parsing: C# script to deserialize the JSON response (Python-side envelope sketched after this list).
- Events: Trigger `QuestAccepted` or `ItemSpawn` events.
- Animation: Sync "Talk" animation with text display.
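Keeping with Python for the examples, this sketch shows the Python-side envelope that the C# parser mirrors; the field names are an assumed contract, and the Unity side would declare a matching `[Serializable]` class for e.g. `JsonUtility.FromJson`:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class NpcReply:
    mood: str           # drives the Animator state ("angry" -> snarl clip)
    text: str           # displayed via the typewriter effect
    action: str         # mapped to events: "give_quest", "spawn_item", ...
    quest_id: str = ""  # only set when action == "give_quest"

def to_wire(reply: NpcReply) -> str:
    """Serialize one reply as a single newline-delimited JSON message."""
    return json.dumps(asdict(reply)) + "\n"

print(to_wire(NpcReply(mood="happy", text="Take this map.", action="give_quest", quest_id="lost_mill")))
```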
Phase 4: Final Polish (Week 4)
- Visuals: High-quality NPC model + Lighting.
- DevLog: Video showing emergent/random conversations.
Dev Logs
Engineering notes & daily updates.
Entry 000: Planning
Date: Feb 3, 2026
Project 07 queued for August. Integrating Large Language Models for dynamic NPC interactions.