This paper examines the limitations and potential of ChatGPT in expressing gender, focusing on its impact on gender bias and stereotypes. As a generative model trained on vast corpora, ChatGPT often reflects societal gender biases and stereotypical representations of traditional gender roles in the content it generates. As a result, it may unintentionally reproduce or even reinforce these biases, entrenching associations of "male dominance" and "female subordination" and limiting the diversity of gender role expression. Adopting a feminist perspective, this study reviews core feminist principles and analyzes the capacity of ChatGPT-4 and its associated image generation model, DALL-E, to express female subjectivity, promote diverse representation, and challenge stereotypes. The paper also identifies the technical and ethical challenges posed by data bias and algorithmic prejudice in ChatGPT, highlighting their implications for gender-equitable expression. To achieve more inclusive and diverse gender representation, the study proposes directions for future improvement, aiming to advance ChatGPT’s development from a feminist perspective through more diverse and balanced training data and optimized generative model architectures.