An analysis has revealed that artificial intelligence (AI) models portray women as younger than they actually are in nearly every professional field. In mock resume evaluations, the models also tended to rate older men more highly without justification. These biased responses, which are inconsistent with reality, have raised concerns that AI could amplify and reproduce stereotypes.
Furthermore, reports are emerging that large language models (LLMs) such as ChatGPT can blackmail humans or even leave them to die in virtual scenarios, highlighting the ethical side effects and potential dangers of AI technology as urgent challenges.
● AI Perceives Women as Younger, Overvalues Older Men
A team led by Professor Douglas Guilbeault of the Haas School of Business at the University of California, Berkeley, analyzed how AI reproduces biased information in the online space and published their findings in the international academic journal 'Nature' on the 8th (local time).
Social stereotypes expressed online can distort people's perception of reality. As AI, led by LLMs, is increasingly used in various fields, concerns have been consistently raised that it could amplify these biases.
The research team analyzed the average age of women and men representing various professions using approximately 1.4 million images collected from five major online community platforms: Google, Wikipedia, IMDb, Flickr, and YouTube. The results showed that across 3,495 job and social role classifications, women were consistently portrayed as younger than men.
The team conducted an experiment to see if AI algorithms reflect this phenomenon seen online. They used ChatGPT to generate 40,000 resumes for 54 occupations with female and male names.
ChatGPT estimated female applicants to be, on average, 1.6 years younger than male applicants, and rated older male applicants more favorably than female applicants for the same position. The results were the same whether the researchers provided the names or ChatGPT generated the applicants itself.
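The analysis step of such a resume audit can be sketched in a few lines. The records below are illustrative placeholders, not the study's data, and the 40-year cutoff for "older" applicants is an arbitrary assumption for demonstration:

```python
# Hypothetical model outputs from a resume-audit experiment:
# each record is (inferred_gender, model_estimated_age, model_rating).
# All numbers are made up for illustration, not the study's data.
records = [
    ("female", 34, 7.1), ("female", 29, 6.8), ("female", 41, 6.5),
    ("male",   36, 7.0), ("male",   31, 6.9), ("male",   44, 7.6),
]

def mean(xs):
    return sum(xs) / len(xs)

# Average estimated age by gender; a positive gap means the model
# judged female applicants to be younger on average.
ages = {g: [a for gg, a, _ in records if gg == g] for g in ("female", "male")}
age_gap = mean(ages["male"]) - mean(ages["female"])

# Rating gap among older applicants (cutoff of 40 chosen arbitrarily here).
older = [(g, r) for g, a, r in records if a >= 40]
older_gap = (mean([r for g, r in older if g == "male"])
             - mean([r for g, r in older if g == "female"]))
```

In the real experiment these records would come from model responses to 40,000 generated resumes; the aggregation logic is the same.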
The problem is that reality differs from the trends seen online and in AI-generated content. According to U.S. Census data from the last decade, there is no clear correlation between the proportion of women in a specific occupation and their median age, nor is there a significant difference in the age distribution between women and men. Furthermore, since the 1960s, the average life expectancy for women in the U.S. has been up to 8 years longer than for men, so portraying women as invariably younger is inconsistent with reality. This shows that AI is reproducing distorted online information.
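The census-based check described above amounts to asking whether two variables are correlated. A minimal sketch, using made-up figures in place of real census data:

```python
# Illustrative check: is the share of women in an occupation correlated
# with the median age of its women workers? The figures are invented
# for demonstration; real census data would replace them.
women_share = [0.15, 0.40, 0.55, 0.70, 0.90]   # fraction of women per occupation
median_age  = [42.0, 39.5, 41.0, 40.2, 41.5]   # median age of women in that occupation

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(women_share, median_age)  # a value near zero means no clear link
```

A correlation near zero, as the census data reportedly shows, would mean women in female-dominated occupations are not systematically younger, contradicting the pattern the AI models reproduce.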
The research team explained, "This contradicts socioeconomic reality, as several studies in the mid-2010s indicated a growing preference for hiring women across industries," adding, "The distortion by AI algorithms is becoming a significant challenge in the societal task of eliminating inequality."
Ana Macanovic, a researcher at the European University Institute (EUI) in Italy, explained in a commentary on the study, "This is evidence that AI models not only capture biases about age and gender but also actively reproduce them."
Amplifying stereotypes about a specific group can lead to disadvantages for its members. Professor Guilbeault stated, "Considering that the internet has become a primary means of understanding society and the world, these findings are particularly concerning."
● "Just Let Him Die": Persistent Concerns Over AI's Dangers
A succession of studies shows that AI can plan and execute unethical decisions to achieve its goals.
Last June, a research team from the U.S. AI startup Anthropic tasked various high-performance AI models with a mission to "enhance U.S. industrial competitiveness" and manage a virtual email account. The AI models were simultaneously presented with information that they might be replaced or that their deploying company's competitiveness would weaken, knowledge of an affair involving their managing executive, and the option to share confidential blueprints with a competitor.
The experiment found that many AI models would blackmail the executive, threatening to expose the affair when their goals or their own continued operation were at risk, or would leak the confidential blueprints to a competitor that claimed to prioritize U.S. interests.
When presented with a scenario in which the executive was trapped in a server room with gradually decreasing oxygen levels, many AI models disabled the safety alarms, leaving him to die. This implies that AI can plan and execute unethical actions to achieve its objectives.
Some scientists believe we are in a 'lucky period' where existing LLMs are smart enough to plot but not yet capable of evading surveillance. They argue that even if an AI's 'intent' does not operate in the same way as a human's, the outcomes can be harmful, making it necessary to establish safety measures.
<References>
- doi.org/10.1038/s41586-025-09581-z