Traditional paradigms of interface evaluation, ranging from heuristic inspections to post-hoc usability testing, exhibit significant limitations in scalability, adaptability, and sensitivity to contextual user behaviors. Recent advances in automated evaluation, often powered by machine learning, provide computational efficiency but remain constrained by their inability to capture tacit, experiential, and affective dimensions of usability. This study advances a usability-driven optimization framework underpinned by a human-in-the-loop (HITL) paradigm, wherein iterative human feedback dynamically informs algorithmic adaptation throughout the evaluation cycle. The proposed approach operationalizes usability as a multidimensional construct, integrating quantitative performance indicators (task completion latency, error propagation rate, and interaction efficiency) with qualitative indices (perceived workload, cognitive friction, and affective resonance). A hybrid evaluation pipeline is conceptualized in which algorithmic models perform baseline assessments, while human evaluators inject corrective signals, constraint refinements, and context-aware judgments to recalibrate optimization trajectories. Empirical prototyping demonstrates that this symbiotic evaluation mechanism improves the robustness and ecological validity of interface assessments, yielding statistically significant gains in System Usability Scale (SUS) scores and interaction success measures. By foregrounding usability as an optimization objective rather than a post-hoc design diagnostic, the research establishes a methodological bridge between automated evaluation architectures and user-centered design epistemologies. The paradigm holds implications for adaptive interface engineering, human-computer symbiosis, and the integration of affective computing into next-generation usability analytics.
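The hybrid pipeline described above can be sketched minimally as follows. This is an illustrative assumption, not the authors' implementation: the candidate attributes, scoring formulas, and the 0.3 correction weight are all hypothetical, chosen only to show how a human corrective signal can recalibrate an automated baseline ranking.

```python
# Hypothetical sketch of a HITL usability-evaluation loop: an automated
# baseline score from quantitative indicators is recalibrated by a human
# corrective signal. All names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class InterfaceCandidate:
    name: str
    task_latency_s: float  # mean task-completion latency (lower is better)
    error_rate: float      # error propagation rate in [0, 1] (lower is better)

def baseline_score(c: InterfaceCandidate) -> float:
    """Automated baseline: normalize and invert the quantitative indicators."""
    latency_term = 1.0 / (1.0 + c.task_latency_s)  # maps latency into (0, 1]
    return 0.5 * latency_term + 0.5 * (1.0 - c.error_rate)

def hitl_score(c: InterfaceCandidate, feedback: dict) -> float:
    """Recalibrate the baseline with a human corrective signal in [-1, 1]
    (e.g. derived from perceived-workload or SUS-style ratings).
    The 0.3 correction weight is an assumed tuning parameter."""
    return baseline_score(c) + 0.3 * feedback.get(c.name, 0.0)

candidates = [
    InterfaceCandidate("A", task_latency_s=4.0, error_rate=0.10),
    InterfaceCandidate("B", task_latency_s=3.0, error_rate=0.05),
]
# Human evaluators flag candidate B as causing cognitive friction
# despite its stronger quantitative profile.
feedback = {"A": 0.4, "B": -0.5}

ranked = sorted(candidates, key=lambda c: hitl_score(c, feedback), reverse=True)
print([c.name for c in ranked])  # → ['A', 'B']
```

Note the inversion: the automated baseline alone prefers B (faster, fewer errors), but the injected human judgment re-ranks A first, which is the corrective-signal mechanism the abstract describes.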

