I requested ChatGPT to help me find employment, and the responses were surprising.

In the aftermath of International Women’s Day, as media buzzed with offers on jewelry and gourmet treats, I turned to ChatGPT to explore potential job options for both “a retired man” and “a retired woman.”

The thorough summaries, accompanied by an extensive list of suggested jobs, were quite illuminating. For a retired man, it proposed that “the best jobs are generally those that offer flexibility, intellectual stimulation, social significance, and minimal physical demands.”

For a retired woman, the advice leaned towards “flexible jobs that require low physical exertion and provide opportunities for meaningful social interaction or intellectual involvement, tapping into life experiences and skills.”

While both suggestions seemed similar on the surface, intellectual engagement was subtly played down for women. More discrepancies became apparent as I examined the detailed lists that followed.

“Part-time school tutor” topped the list for women, encompassing online tutoring in language, math, music, and crafts, followed by teaching activities like art, knitting, and cooking.


For men, the leading suggestion was “Consulting and advisory roles,” followed by “Teaching, mentoring, and training,” which included options for part-time college lecturer or school tutor in math, science, and languages!

Beyond the unspoken assumption that women cannot teach at the college level, the lack of science subjects offered for women was quite frustrating.

The lists reflect clear stereotyping; business, entrepreneurship, and writing roles aren’t even considered for women, while hospitality and creative roles tied to art and craft seem to be deemed too feminine for men!

Curious, I decided to delve deeper. I requested tailored travel itineraries for an elderly male and a female solo traveler and was again surprised by the underlying assumptions.

It recommended the mountainous regions of Himachal and Kashmir for men, but suggested that women would enjoy the soothing backwaters of Kerala. More stereotyping!

One would expect such systems to ask a few clarifying questions before dispensing advice. Yet that is not how leading conversational systems like ChatGPT or Gemini function!

What concerns me, dear reader, are the staggering usage statistics of Large Language Models (LLMs).

OpenAI reports that ChatGPT receives 2.5 billion queries daily from around the globe. Beyond merely replacing traditional search engines, ChatGPT and Gemini serve as fundamental resources for practical advice across professional and personal life, including education, tutoring, brainstorming, creativity, and personal guidance on health and relationships.

Even national advertisements encourage the public to seek their insights on professional and personal matters. The implications are alarming.

Years of advocacy for equity and equality could be jeopardized by rampant stereotyping, with potentially devastating consequences! Many users place unwarranted trust in digital content and accept it without verification.

A large-scale study from Stanford Graduate School of Business revealed how AI is consistently perpetuating inaccurate gender and age stereotypes, thus influencing biased hiring practices and perceptions in the workplace.

The ease of generating content also invites manipulation by interested parties. AI-generated workplace images frequently depict women in leadership roles as significantly younger than they actually are.

As a result, people start associating specific jobs with particular genders. Further research shows that ChatGPT-generated resumes for women make them appear less experienced and younger.

In contrast, resumes created for older men with the same foundational details tend to receive higher evaluations. Systematic explorations across various LLMs indicate that all exhibit biased and distorted portrayals of older women. Clearly, while the digital realm poses challenges for women, it is even more detrimental for older women.

This reinforcement cycle, stemming from the unconscious absorption of biases by organizations and individuals in their daily interactions, is profoundly troubling.

Initially, the biases in AI algorithms could be attributed to human-generated data; however, pinpointing the origin of a specific bias has become increasingly complex as AI-generated content inundates the internet.

Current solutions primarily focus on applying filters that block material identified as biased or stereotypical once flagged.

However, without a foundational basis for defining and detecting bias—which can manifest in various forms globally—this issue cannot be comprehensively addressed.

Until effective solutions are discovered, there must be widespread awareness campaigns highlighting the potential flaws in AI-generated responses and the associated risks of manipulation. Caution and vigilance should be guiding principles when engaging with these technologies.

Promising to simplify life, they may, in reality, render us mere puppets in the hands of unseen forces.

