Mission Control
Central Command for Project Neural-Nav.
Project Synopsis
Neural-Nav v1.0 is a research infrastructure prototype that bridges a real-time simulation and a learning agent.
It establishes a raw, bidirectional TCP/IP Socket Handshake between a Unity environment (The Body) and a Python script (The Brain). Unlike off-the-shelf frameworks such as Unity ML-Agents, which hide the transport layer inside a "Black Box," this project is built from scratch to provide byte-level control over the data pipeline.
Why This Matters
To master Reinforcement Learning (RL), one must master the environment. By building the "Nervous System" manually, we learn exactly how latency, data serialization, and synchronization affect AI decision-making.
Tech Stack
- Engine: Unity (C#)
- Brain: Python 3
- Transport: raw TCP/IP sockets
- Serialization: JSON
Project Checkpoints
- Phase 1: World Architecture (Environment)
- Phase 2: The Physical Brain (Physics)
- Phase 3: The Handshake (Networking)
- Phase 4: Deployment (Demo & Docs)
Field Notes & Learnings
Key engineering concepts mastered during this project.
1. The "Two-Language" Problem
Concept: Game engines run on C++ or C# for performance, while data-science tooling runs on Python for flexibility. The two sides don't natively speak the same language.
Solution: We treat the simulation as a "Server" and the AI as a "Client" (or vice versa) and communicate via a universal protocol: TCP Sockets. This completely decouples the simulation logic from the learning logic.
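To make the decoupling concrete, here is a minimal Python sketch. `socketpair()` stands in for the real localhost TCP link; both ends see nothing but bytes, so either side could be swapped out for any language that can open a socket.

```python
import socket

# socketpair() stands in for the real 127.0.0.1 TCP connection.
# The wire carries only bytes -- neither end knows or cares what
# language produced them.
body, brain = socket.socketpair()

body.sendall(b"state bytes from the Body")  # Unity's role: write bytes
print(brain.recv(4096))                     # the Brain's role: read bytes
```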
2. Synchronous Networking
Concept: Real-time games run at 60fps. If the Python brain is slow, the game keeps running, causing the agent to act on old data.
Solution: We implement a Blocking Handshake. Unity captures state -> Pauses -> Sends Data -> Waits for Python -> Python Replies -> Unity Unpauses. This ensures frame-perfect alignment.
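A minimal sketch of the Python side of that lockstep, assuming the Brain is the receiving end and each `recv()` delivers one complete JSON message (safe for small payloads on localhost; real code should add explicit framing, e.g. newline-delimited messages). The action keys are placeholders, not the project's schema.

```python
import json

def handshake_loop(conn):
    """Lockstep exchange: one state in, one action out, per physics step."""
    while True:
        data = conn.recv(4096)                    # blocks -- this is the sync point
        if not data:
            break                                 # Unity closed the connection
        state = json.loads(data.decode("utf-8"))  # St arrives as flattened JSON
        action = {"forceX": 0.0, "forceZ": 1.0}   # placeholder policy
        conn.sendall((json.dumps(action) + "\n").encode("utf-8"))  # Unity unpauses
```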
3. Data Serialization
Concept: Raw memory (pointers) is meaningless outside its own process, so state cannot be shared directly; it must be flattened into a byte stream.
Solution: We use JSON. It is human-readable and supported by both languages.
- Unity: `JsonUtility.ToJson(state)`
- Python: `json.loads(data)`
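A round-trip sketch on the Python side. The field names are illustrative, shaped like what `JsonUtility.ToJson` emits for a `[Serializable]` class; they are not the project's actual schema.

```python
import json

# Shaped like JsonUtility.ToJson output for a [Serializable] state class.
raw = '{"x":0.5,"z":-1.2,"rotY":90.0,"rays":[3.1,0.8,5.0]}'

state = json.loads(raw)        # wire string -> Python dict
print(state["rays"][0])        # 3.1

reply = json.dumps({"forceX": 0.0, "forceZ": 1.0})   # dict -> wire string
```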
4. State-Action Architecture
We strictly define the world in two concepts:
- STATE (St): What the agent sees (Position, Rotation, Raycast distances). This is Read-Only.
- ACTION (At): What the agent does (AddForce X, AddForce Z). This is Write-Only.
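On the Python side this contract can be pinned down with dataclasses; the field names here are assumptions for illustration. `frozen=True` enforces the Read-Only rule at the language level.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)       # frozen mirrors "Read-Only": the Brain never mutates St
class State:
    x: float                  # position
    z: float
    rot_y: float              # yaw
    rays: Tuple[float, ...]   # raycast distances

@dataclass
class Action:                 # At: everything the Brain is allowed to write
    force_x: float            # AddForce along X
    force_z: float            # AddForce along Z
```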
Implementation
Step-by-step Execution Plan.
Phase 1: World Architecture (Days 1-7)
- Unity Setup: Create `Main_Prototype.unity` (Plane + Cube).
- Assets: Import `Target.fbx` (Green) and `Obstacle.fbx` (Red).
- Validation: Write `CoordPrinter.cs` to log the agent's Vector3 position and verify that +Z is Forward.
Phase 2: The Physical Brain (Days 8-14)
- Script: Create `AgentMotor.cs`. Cache `Rigidbody rb`.
- Physics: Implement `MoveAgent` using `ForceMode.VelocityChange` (see the semantics sketch after this list).
- Collision: Reset Position on 'Wall' hit.
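For intuition on the `ForceMode` choice, a plain-Python model of the semantics (not the Unity API; mass and timestep values are illustrative): `VelocityChange` applies the vector directly to velocity, ignoring mass and timestep, which makes each action an instant, deterministic step.

```python
# Plain-Python model of Unity's ForceMode semantics (illustrative values).
MASS = 1.0    # Rigidbody mass
DT = 0.02     # Unity's default fixed timestep

def force(v, f):
    """ForceMode.Force: continuous force, scaled by mass and timestep."""
    return v + (f / MASS) * DT

def velocity_change(v, f):
    """ForceMode.VelocityChange: applied directly; mass and timestep ignored."""
    return v + f

print(force(0.0, 10.0))            # 0.2  -- gradual acceleration
print(velocity_change(0.0, 10.0))  # 10.0 -- instant, per-step velocity change
```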
Phase 3: The Handshake (Days 15-21)
- Python: Write `brain_server.py`. Bind the socket to 127.0.0.1:12345 (see the skeleton after this list).
- Unity: Write `SocketManager.cs` with `TcpClient`.
- Thread Safety: Run socket reads on a separate Thread/Task so blocking reads never stall Unity's main thread.
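A minimal skeleton for `brain_server.py` under the assumptions above; `handshake_loop` is the lockstep sketch from Field Note 2, and the framing caveat noted there applies here too.

```python
import socket

HOST, PORT = "127.0.0.1", 12345    # loopback only: Body and Brain share a machine

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)   # fast restarts
    srv.bind((HOST, PORT))
    srv.listen(1)                  # exactly one Body per Brain
    print(f"Brain listening on {HOST}:{PORT} ...")
    conn, _addr = srv.accept()     # blocks until SocketManager.cs connects
    with conn:
        handshake_loop(conn)       # the lockstep loop from Field Note 2
```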
Phase 4: Docs & Polish (Days 22-25)
- Demo: Record split-screen video (Unity + Terminal).
- Deploy: Update Portfolio and GitHub Readme.
Dev Logs
Engineering notes & daily updates.
Entry 000: Project Init
Date: Feb 3, 2026
Repository initialized. Mapped to `== PROJECTS ==`. Phase 1 started.