Digital systems are becoming increasingly feature-rich and interaction-intensive, while users vary widely in their goals, expertise, cognitive styles, accessibility requirements, and device contexts. This diversity makes one-size-fits-all interfaces inefficient and often frustrating, increasing error rates, cognitive load, and user dissatisfaction. Existing personalization approaches, such as themes, fixed preferences, and rule-based customizations, offer limited flexibility and fail to adapt to evolving user behavior and changing contexts. Although AI-driven adaptive interfaces have shown improvements, most approaches remain system-centric and insufficiently address human-centered considerations, often resulting in disruptive interface changes, a perceived loss of control, and diminished user trust. This paper proposes a Human-Centered Deep Reinforcement Learning (HC-DRL) framework for generating personalized user interfaces, in which UI adaptation is modeled as a constrained sequential decision-making process. The framework combines continuous user modeling with a structured, design-system-based representation of the user interface. A DRL agent learns viable adaptation policies under a UX-sensitive reward function that rewards task success and efficiency while explicitly accounting for user satisfaction, cognitive load, trust, perceived control, and disruption penalties. Safety guardrails enforce accessibility and usability constraints and enable rollback to stable interface states when risks or performance degradation are detected. The approach is validated through an end-to-end implementation and evaluation pipeline that includes comparisons with static and heuristic baselines, ablation studies quantifying the contribution of each component, and user studies. The results demonstrate that HC-DRL provides a practical and robust foundation for adaptive interfaces that enhance functionality without compromising stability, accessibility, or user confidence.
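
For concreteness, a minimal sketch of how such a UX-sensitive reward could be composed is given below; the individual terms, weights, and symbols are illustrative assumptions rather than the framework's exact formulation.

% Illustrative sketch only: one plausible weighted composition of a UX-sensitive reward.
% All symbols and weights below are assumptions, not the framework's definitive design.
\begin{equation*}
r_t \;=\; \lambda_{1}\, r^{\text{task}}_{t}
      \;+\; \lambda_{2}\, r^{\text{eff}}_{t}
      \;+\; \lambda_{3}\, \hat{s}_{t}
      \;-\; \lambda_{4}\, \hat{c}_{t}
      \;-\; \lambda_{5}\, d_{t}
      \;-\; \lambda_{6}\, \mathbf{1}[\text{constraint violated}]
\end{equation*}
% Here r^task rewards task success, r^eff rewards efficiency, \hat{s} and \hat{c} are
% estimates of user satisfaction and cognitive load, d penalizes layout disruption
% between consecutive interface states, and the indicator term penalizes violations
% of accessibility or usability constraints enforced by the safety guardrails.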

