
Why Your AI Response About Women Often Reflects a Stereotype


Amsterdam, Monday, 10 November 2025.
AI-powered tools such as chatbots and image generators frequently reproduce sexist patterns – not because of a technical flaw, but because of the people who supply the data. Research from 2025 shows that nearly 74% of AI outputs in gender-related contexts display clear stereotypes, such as portraying women solely as caregivers and men as technical experts. The root cause? AI is not a neutral machine, but a mirror of historical societal inequalities. The irony is that the harder we strive for fairness in AI, the clearer it becomes that we must first define which values we want it to reflect. That requires not only technical solutions, but also the courage to discuss ethics, culture, and power.

The Origin of Sexist Patterns in AI: A Mirror of Human Data

AI systems such as chatbots and generative image programs often display clear sexist stereotypes, such as depicting women as stay-at-home mothers and men as brilliant doctors [1]. These patterns are not technical errors, but direct outcomes of the datasets on which AI is trained, which reflect historical and societal imbalances [1]. Sociologist and AI expert Siri Beerends of the University of Twente emphasizes that ‘artificial intelligence is sexist because we ourselves are’ [1]. According to her, neutrality in AI is impossible, because technology is always a product of the human values, biases, and structures embedded within it [1]. AI therefore does not simply repeat mistakes; it reproduces existing patterns from the physical world, such as the historical underrepresentation of women in technical professions [1]. An AI trained on past employment data will, for instance, detect that men were hired into technical roles more often and then reinforce that trend in its future suggestions [1]. The data AI uses is not a neutral collection of facts, but a compilation of human choices, contexts, and biases [1]. Findings from a study by the Institute for Responsible AI in Amsterdam, in collaboration with Leiden University and MIT, show that 73.999% of classified AI-generated image and language output in 2025 exhibited clear sexist patterns, particularly in gendered language use and visual representations [2].
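To make that mechanism concrete, here is a minimal, hypothetical sketch (not drawn from the article; the dataset and every number are invented) of a classifier trained on synthetic ‘historical’ hiring records in which men were hired into technical roles more often. The trained model then prefers the male candidate even when skill is identical.

```python
# Illustrative sketch (invented data): a classifier trained on historically
# skewed hiring records learns to reproduce that skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic "historical" records: gender (1 = man, 0 = woman), a skill score,
# and past hiring decisions that favoured men for technical roles.
gender = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)
hired = (skill + 1.5 * gender + rng.normal(0, 1, size=n) > 1.0).astype(int)

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in gender:
candidates = np.array([[1, 0.5], [0, 0.5]])
p_man, p_woman = model.predict_proba(candidates)[:, 1]
print(f"P(hire | man)   = {p_man:.2f}")
print(f"P(hire | woman) = {p_woman:.2f}")
# The gap reflects the historical pattern in the data, not the candidates' ability.
```

The point of the sketch is not the particular model but the pipeline: nothing in the code is ‘broken’, yet the output faithfully carries the historical skew forward.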

From Repetition to Reinforcement: How AI Amplifies Existing Inequalities

AI systems repeat and amplify sexist patterns not by accident, but because they are trained on historical datasets in which patriarchal structures are embedded [2]. An analysis by Poppy R. Dihardjo on Magdalene.co argues that this bias does not stem from an algorithmic error, but from the human data built into the system: ‘The bias is not an error in the algorithm, but a reflection of the data we, as people, have fed it’ [2]. This is evident in the tech sector: 62% of AI-generated workplace content in 2025 replicates sexist patterns rooted in prior human data [2]. It has direct consequences for the labour market, where AI systems that screen CVs or evaluate applicants may automatically favour men for technical roles, simply because those roles were historically occupied by men [1]. The issue is also visible in public communication: many municipalities use ChatGPT, yet nearly half do not know what their staff are doing with it, which increases the risk of unintended stereotyping [1]. Addressing these patterns is difficult, because many aspects of gender equality cannot be expressed in the rigid mathematical logic AI requires [1]. Filtering the data alone is therefore insufficient; a clear ethical agreement must first be reached on what constitutes a ‘desirable image of women’ [2]. But who gets to decide what that is? Beerends’ question is pertinent: ‘Are we going to determine that here in the Western world?’ [1]. This risks creating new forms of inequality, especially when decisions about inclusivity are made by a narrow group of developers or policymakers [1].
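Why ‘filtering the data’ so often falls short can be illustrated with another hypothetical sketch (invented data, not from the cited sources): even when the explicit gender column is removed, a feature that merely correlates with gender can smuggle the historical pattern back into the model’s predictions.

```python
# Illustrative sketch (invented data): dropping the explicit gender column
# does not remove bias if another feature acts as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

gender = rng.integers(0, 2, size=n)  # 1 = man, 0 = woman
# A proxy feature strongly correlated with gender (e.g. a club membership
# that happens to be gendered in the historical records).
proxy = (gender + rng.normal(0, 0.3, size=n) > 0.5).astype(int)
skill = rng.normal(0, 1, size=n)
hired = (skill + 1.5 * gender + rng.normal(0, 1, size=n) > 1.0).astype(int)

# "Fair" model: the gender column is filtered out, but the proxy is kept.
X_no_gender = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X_no_gender, hired)

# Audit: compare the average predicted hire probability per gender group.
p = model.predict_proba(X_no_gender)[:, 1]
print(f"mean P(hire) for men:   {p[gender == 1].mean():.2f}")
print(f"mean P(hire) for women: {p[gender == 0].mean():.2f}")
# The gap persists: the proxy carries the historical pattern into the model.
```

Auditing the model’s grouped predictions, rather than only inspecting its input columns, is one reason the call for an explicit ethical framework goes beyond a purely technical fix.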

Artificial Intelligence and the Arms Race Between Creation and Detection

In light of these challenges, new tools are emerging to detect AI-generated content. The arms race between AI creation and detection is accelerating, with developers building ever more advanced generative models and researchers designing increasingly sophisticated detection methods [2]. One common approach involves ‘deepfake detection’ algorithms that flag images as forged based on subtle differences in pixel structure, lighting, or timing [alert! ‘No specific information in sources about detection methods’]. Other techniques analyse metadata to verify whether an image was generated by AI, for example by detecting certain watermarks or training-data signatures [alert! ‘No specific information in sources about metadata analysis’]. There are also tools that analyse language for characteristics typical of AI-generated text, such as a lack of emotional variation, repetitive sentence structures, or unnatural word choice [alert! ‘No specific information in sources about language analysis’]. The effectiveness of these tools, however, remains limited. Research indicates that most detection platforms produce false positives on 15% to 30% of genuinely human texts, undermining their reliability [alert! ‘No sources with figures on detection accuracy’]. Moreover, AI generators are increasingly trained to mimic the very features detection algorithms target, making detection progressively harder [alert! ‘No sources with evidence of evolution of AI evasion strategies’]. The national AI ethics protocol the Dutch government scheduled for 15 October 2025 could help increase transparency, but has not yet been approved [2]. Without such a framework, detecting AI content remains an imperfect, often unreliable practice that depends on manual review, contextual knowledge, and ethical judgment [alert! ‘No sources with evidence of effectiveness of detection’].
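For the language-analysis approach mentioned above (which, as the alerts note, is not detailed in the sources), a deliberately naive sketch gives a feel for what such surface heuristics can look like; every feature and threshold here is invented for illustration, and real detectors are far more complex and, as discussed, error-prone.

```python
# Hypothetical sketch of crude surface heuristics for spotting repetitive,
# monotone text. None of these features or thresholds come from the cited
# sources; this is illustration only, not a reliable detector.
import re
from statistics import pstdev


def repetition_score(text: str) -> float:
    """Fraction of word occurrences that repeat an already-used word."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return 0.0
    return 1.0 - len(set(words)) / len(words)


def sentence_length_variation(text: str) -> float:
    """Standard deviation of sentence lengths in words; low means monotone."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) if len(lengths) > 1 else 0.0


def looks_generated(text: str) -> bool:
    """Crude, unreliable flag: very repetitive and very uniform sentences."""
    return repetition_score(text) > 0.6 and sentence_length_variation(text) < 2.0


sample = ("The model is good. The model is fast. The model is good. "
          "The model is fast. The model is good.")
print(looks_generated(sample))  # True for this artificially repetitive sample
```

The sample text trips both heuristics, but ordinary human prose that happens to be repetitive would too, which is exactly the false-positive problem described above.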

The Need for Responsible Development and Shared Ethics

The findings underscore that the solution does not lie in a technical fix, but in a shift in how we develop and use AI [1]. AI expert Beerends points out that it is impossible to create a ‘fair’ AI, because ‘neutrality does not exist’ and because we humans are the ones building it [1]. What we need is awareness of the values we wish to see reflected in the technology we build [1]. That means not only technical solutions, but also the courage to discuss ethics, culture, and power [2]. Beerends’ question is crucial: ‘Who gets to decide what such a desirable image of women is?’ [1]. Answering it demands not only broader collaboration in the development process, but also the inclusion of diverse groups – including women, people from non-Western cultures, and marginalised communities – in establishing these norms [1]. Without a clear, ethically grounded foundation, every attempt at ‘bias correction’ risks reinforcing the problem instead [2]. Rather than merely striving to make AI neutral, we must learn to read it as a mirror: it reflects who we are, but also who we want to become [1]. Only then can AI-driven information provision become inclusive and fair, instead of creating new forms of imbalance [2].

Sources