Traditional driver monitoring systems rely on fixed thresholds to detect distraction, often leading to false alerts and poor driver acceptance. This presentation introduces a context-aware, behavior-adaptive driver monitoring framework that dynamically assesses driver risk using in-cabin vision and vehicle motion signals. The system fuses RGB and near-infrared vision with IMU-derived motion features to estimate driver attention, driving workload, and operational context in real time. Lightweight, edge-optimized neural networks perform head pose, gaze, eyelid state, and handheld device detection under diverse lighting and cabin conditions. Rather than applying uniform alert thresholds, the system adapts sensitivity based on individualized driver behavior and current driving demand, increasing responsiveness during complex scenarios while reducing alerts during low-risk conditions. All inference is performed fully on-device to meet automotive latency, privacy, and reliability requirements. Results demonstrate reduced false alerts and improved driver acceptance, supporting scalable aftermarket deployment and OEM integration.
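The adaptive-sensitivity idea above can be sketched as follows. This is an illustrative toy, not the system's actual algorithm: the `DriverBaseline` statistics, the `demand` score in [0, 1], and the scaling constants are all hypothetical stand-ins for the learned per-driver behavior model and workload estimate described in the abstract.

```python
from dataclasses import dataclass


@dataclass
class DriverBaseline:
    # Hypothetical per-driver statistics, assumed to be learned online.
    mean_glance_s: float = 0.8   # typical off-road glance duration (s)
    std_glance_s: float = 0.3    # its variability (s)


def adaptive_threshold(baseline: DriverBaseline, demand: float) -> float:
    """Return a glance-duration alert threshold in seconds.

    `demand` in [0, 1] is a stand-in for estimated driving workload
    (e.g. from speed and maneuver complexity). High demand tightens
    the threshold; low demand relaxes it.
    """
    # Personalized base: two standard deviations above the driver's norm.
    base = baseline.mean_glance_s + 2.0 * baseline.std_glance_s
    # Clamp demand, then scale the base between 1.5x (idle) and 0.6x (peak).
    d = max(0.0, min(1.0, demand))
    return base * (1.5 - 0.9 * d)


def should_alert(glance_off_road_s: float,
                 baseline: DriverBaseline,
                 demand: float) -> bool:
    # Alert only when the observed glance exceeds the adaptive threshold.
    return glance_off_road_s > adaptive_threshold(baseline, demand)
```

With these illustrative constants, a 1.0 s off-road glance triggers an alert in a high-demand context (threshold 0.84 s) but not in a low-demand one (threshold 2.1 s), capturing the intended behavior of fewer alerts in low-risk conditions.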