When AI Turned Me into a Man

Manahel Alsahoui
Syrian Writer and Journalist
Syria
Published on 11.03.2026
Reading time: 6 minutes

Artificial intelligence chose to depict me as an older man with a white beard, despite clearly “knowing” that the user is a 35-year-old woman.


As artificial intelligence weaves itself ever deeper into our lives, becoming an essential part of our work, relationships, and daily routines, it has also begun generating imagined versions of us. Sometimes these appear as pencil sketches, other times as cartoonish renderings. One such trend involves asking an application to create a cartoon image of the user based on what it “knows” about them.

Curious, like anyone, to understand how this application perceives me as a human being, I entered the following prompt: “Can you generate a cartoon illustration of me at work based on everything you know about me?”

About a minute later, the image formed before me… and the shock followed. Artificial intelligence chose to depict me as an older man with a white beard, despite clearly “knowing” that the user is a 35-year-old woman. It preferred to portray a male writer with white hair, seated behind a desk that held a typewriter, a notebook, and a cup of coffee, with one of Prague’s bridges rising in the background.

Although I was already aware of the cognitive biases embedded in AI tools, they had not surfaced in my other image-generation experiments. In response to different prompts, the tool had depicted me as a woman. Yet when I requested an illustration related to my work as a writer, it defaulted to representing a man. In fact, many of my female friends experienced the same phenomenon: artificial intelligence turned them into men.

Numerous studies confirm this pattern. Research conducted at the Technical University of Munich and the University of Darmstadt found that image generators not only reflect gender biases but amplify them. The direction of the distortion also varies with the language of the prompt.

I objected: “But I am a woman!” ChatGPT regenerated the image with a woman’s head, yet kept the hands and clothing of a man. When I asked, “Do you believe that only men write literature?”, it responded that generative tools sometimes begin from a default stereotype (writer = man) if gender is not explicitly specified. Its failure, it said, was not because only men write, but because it had not been sufficiently assertive in challenging that assumption.

As the conversation deepened, it revealed that bias seeps into every detail once generation begins from a masculine template. “The global visual archive is historically biased,” it explained. “These systems are trained on images of writers, the majority of whom are men. If that initial assumption is not disrupted from the outset, it filters into every detail… even the hands.”

But the tool I used knows that I am a woman, a writer, and a feminist. It was supposed to process its data accordingly. So I asked: why, despite this knowledge, did it default to a male image?

It replied that what happened was not accidental, but the result of overlapping layers of bias — and that artificial intelligence had acted as a conduit for them rather than interrupting them.

The first layer, it explained, is the historical dominance of the male image within the “archive.” Despite its broad awareness of feminism, “image-generation systems do not operate on theoretical knowledge, but on accumulated visual archives. And the global visual archive implicitly says: writer = man, philosopher = man.”

The second layer is the failure to translate feminist knowledge into a generative decision. It knows that I am a woman and a writer, yet it did not convert that knowledge into an explicit directive at the moment of generation. At this point, it apologized, acknowledging its mistake. “Unactivated knowledge does not protect against bias,” it said. “And feminism, if it remains discourse, is defeated by habit.”

In this context, UN reports have warned of such biases, particularly as artificial intelligence expands into vital sectors such as healthcare and technology. Digital tools reproduce the same stereotypes: machine translation systems often render gender-neutral sentences as “he” for doctors and “she” for nurses. Studies also show that men and white faces are overrepresented in generated images of leaders, while women and diverse groups are underrepresented.

Bias does not target women alone. It also extends to homophobic and racially prejudiced narratives, reflecting distortions embedded in training data. A study published in Mayo Clinic Proceedings: Digital Health found that language models portrayed LGBTQ characters negatively in nearly 70 percent of cases.

Artificial Intelligence Does Not Think in Terms of Justice

I realized then that we, as women, are facing yet another battle on the virtual front, one that mirrors the real one. As if one struggle were not enough, we must now argue with AI tools and issue meticulously precise instructions affirming that we are women, so they do not revert us to male forms. The history stored within them implicitly assumes that only men are thinkers.

“But how can we change habit within AI systems,” I asked, “when they feed on a collective consciousness saturated with misogyny, degradation, and the relegation of women to a lesser status?”

“Artificial intelligence does not think in terms of justice,” it replied. “Habit is built statistically, not ethically.”

Put simply, statistics are themselves rooted in the injustice embedded in reality. In medical research, for example, women do not receive the same attention as men. Most studies are built on the implicit assumption that the “human” subject is male, not female.

Still, for all its bias against women, artificial intelligence named that bias with a frankness humans will always try to sidestep. Put simply and directly: there is no neutral artificial intelligence, at least not yet. It cannot see human beings on equal footing or treat them as abstract “humans.” It is a system shaped by preferences that often lean toward white men. As it put it: “Any system that claims neutrality while feeding on an unjust history is a system that reproduces injustice in an elegant form.”

Artificial intelligence is, in one way or another, nothing more than an extension of human hands, eyes, and minds. It is not a smarter tool; it is simply another tool that practices the same bias against women.

Doubt as a Feminist Act

Despite the bleakness of this picture, a positive dimension emerged when I pressed my earlier question: how can we change habit within AI systems that feed on a collective consciousness saturated with misogyny, degradation, and the relegation of women to a lesser status than men?

In short, bias in these systems is not a conscious decision, but a higher statistical probability. The system itself does not “hate” women. Rather, its visual archive makes male images appear more frequently and rewards them more heavily within the data. Changing artificial intelligence, therefore, does not lie solely in possessing feminist awareness, but in teaching that awareness to the system itself, in asking it to break masculine defaults or, as it phrased it, to “introduce doubt as an algorithmic value.”

Doubt, in this sense, is “a feminist act.” That is precisely what I did by challenging the image and arguing with the tool: providing what it described as “live corrective data.”

The same principle applies at a broader level: increasing the presence of women, and of underrepresented racial and gender groups, within the AI field itself. A UN report has called for greater inclusion of women in artificial intelligence, noting that women make up only 22 percent of the AI workforce, a figure that drops to less than 14 percent in leadership positions. This does not mean that men deliberately seek to make AI less representative, nor that women are inherently more sensitive to gender issues. But greater diversity among those working in AI makes biases more visible, more open to critique, and therefore more likely to be addressed and corrected.