Using a mixed-methods approach, data were collected through quantitative surveys (N=258) and qualitative interviews with AI practitioners and decision-makers across multiple industries. Findings indicate that AI significantly improves decision efficiency by automating analytical tasks, reducing human cognitive biases, and enabling real-time insights. However, challenges persist, particularly in algorithmic transparency, ethical governance, and compliance with regulatory standards. Regression results show that AI integration positively influences decision effectiveness (β=0.156, p=0.031), while human oversight (β=0.381, p<0.001) and regulatory compliance (β=0.314, p<0.001) play crucial mediating roles. Ethical and security challenges necessitate stronger AI governance frameworks, as organizations struggle with bias mitigation, legal accountability, and AI explainability. Industry experts emphasize the need for a hybrid human-AI collaboration model in which AI augments rather than replaces human decision-makers. This study contributes to the AI governance literature by highlighting the importance of ethical AI deployment, transparent decision systems, and regulatory adherence. Future research should explore AI’s impact in high-risk sectors, develop proactive AI compliance strategies, and examine cross-national AI regulatory frameworks to enhance responsible AI adoption globally.
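To make the reported mediation structure concrete, the sketch below shows one conventional way such coefficients could be estimated: Baron–Kenny-style OLS regressions relating AI integration to decision effectiveness, with human oversight and regulatory compliance as mediators. This is a minimal illustration under assumed conditions, not the study's actual analysis pipeline; the variable names (`ai_integration`, `human_oversight`, `regulatory_compliance`, `decision_effectiveness`) and the synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the real study surveyed N=258 practitioners.
rng = np.random.default_rng(42)
n = 258
ai_integration = rng.normal(size=n)
human_oversight = 0.5 * ai_integration + rng.normal(size=n)        # assumed mediator path
regulatory_compliance = 0.4 * ai_integration + rng.normal(size=n)  # assumed mediator path
decision_effectiveness = (0.15 * ai_integration
                          + 0.38 * human_oversight
                          + 0.31 * regulatory_compliance
                          + rng.normal(size=n))

df = pd.DataFrame({
    "ai_integration": ai_integration,
    "human_oversight": human_oversight,
    "regulatory_compliance": regulatory_compliance,
    "decision_effectiveness": decision_effectiveness,
})

# Step 1: total effect of AI integration on decision effectiveness.
total = smf.ols("decision_effectiveness ~ ai_integration", data=df).fit()

# Step 2: paths from AI integration to each hypothesised mediator.
to_oversight = smf.ols("human_oversight ~ ai_integration", data=df).fit()
to_compliance = smf.ols("regulatory_compliance ~ ai_integration", data=df).fit()

# Step 3: direct effect of AI integration controlling for both mediators;
# the mediator coefficients correspond to the β values quoted in the abstract.
direct = smf.ols(
    "decision_effectiveness ~ ai_integration + human_oversight + regulatory_compliance",
    data=df,
).fit()

print(direct.params.round(3))
print(direct.pvalues.round(3))
```

In practice, structural equation modelling or bootstrapped indirect-effect tests would typically replace the stepwise regressions above; the sketch is only meant to clarify how a positive direct effect can coexist with substantially larger mediator coefficients.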