In recent years, job hunting has gone almost entirely digital. Platforms such as LinkedIn, Naukri, or Apna have become the starting point for individuals fresh out of college and looking for their first job. These job portals promise to simplify the hiring process by using smart algorithms that automatically sort, filter, and recommend candidates to employers. At first glance, this seems like a more efficient way to match people to jobs. But here's a question worth asking: are these algorithms unintentionally reinforcing inequality?
As algorithms increasingly shape hiring decisions, it becomes crucial to examine whether these systems are genuinely creating equal opportunities for all, or whether they are subtly reinforcing long-standing social and economic divides. When not carefully designed, these automated processes can inadvertently prioritize candidates from certain backgrounds while sidelining others who may already face barriers to employment. Ironically, technologies intended to make hiring more efficient and inclusive can end up creating additional hurdles for those who most need better access to stable job opportunities.
This blog explores whether job portals might disadvantage candidates from lower-income or underrepresented backgrounds based on where they live, where they studied, and how digitally fluent they are. We'll draw on recent research in labour economics and public policy to see how algorithmic decisions may be widening existing gaps instead of closing them, especially in a country like India, where class, caste, and location continue to shape life and work opportunities. Before diving into the biases, let's take a closer look at what happens behind the scenes on job portals and how algorithms actually decide who gets noticed and who gets overlooked.
How Job Portals And Their Algorithms Work
At the heart of modern job portals are algorithms designed to simplify the overwhelming task of matching millions of job seekers with potential employers. When a candidate uploads a resume, the platform's algorithm scans it for key details such as educational qualifications, work experience, skills, certifications, location, and even the formatting of the document. The system then ranks and recommends candidates based on how closely they appear to match the job’s requirements. These algorithms are also responsible for automatically filtering out resumes that don’t meet certain thresholds, determining who gets seen by recruiters and who doesn’t. While this automation brings speed and scale, it also opens the door to unintended biases that can influence who gets considered for job opportunities.
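To make this concrete, here is a minimal sketch of how such a ranking pipeline might work. The field names, weights, and cutoff below are illustrative assumptions for this post, not any real platform's implementation:

```python
# A minimal, hypothetical sketch of how a portal might score and rank
# parsed resumes against a job posting. All field names, weights, and
# the cutoff are illustrative assumptions.

def score_resume(resume: dict, job: dict) -> float:
    """Score a parsed resume against a job's stated requirements."""
    score = 0.0
    # Reward each required skill found on the resume.
    matched = set(resume.get("skills", [])) & set(job.get("required_skills", []))
    score += 2.0 * len(matched)
    # Reward meeting the experience threshold.
    if resume.get("years_experience", 0) >= job.get("min_experience", 0):
        score += 3.0
    # Reward an exact location match; this is where geography enters.
    if resume.get("city") == job.get("city"):
        score += 1.0
    return score

def rank_candidates(resumes: list[dict], job: dict, cutoff: float = 4.0) -> list[dict]:
    """Keep resumes above a threshold and sort best-first; everyone else
    is silently filtered out before a recruiter ever sees them."""
    shortlisted = [r for r in resumes if score_resume(r, job) >= cutoff]
    return sorted(shortlisted, key=lambda r: score_resume(r, job), reverse=True)
```

Even in this toy version, notice that the cutoff silently discards candidates before any human review, and that an apparently neutral field like location already carries weight in the final ranking.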
What Is Algorithmic Bias?
Algorithmic bias refers to systematic errors that occur when algorithms reflect or amplify social prejudices often embedded in the data they're trained on. Since these systems rely on historical data, which may already carry patterns of inequality or exclusion, algorithms can inadvertently reproduce these patterns in ways that seem neutral on the surface. This becomes especially concerning in contexts like hiring, where even small biases in data or design can translate into significant real-world consequences. Before moving into specific examples, it's important to understand how algorithms can unintentionally favor or disadvantage certain groups based on hidden patterns in data. One such factor influencing hiring decisions is a candidate's residential location or pin code.
Digital Disadvantage: Address-based Filtering
One visible gatekeeper is the candidate's residential address or pin code. Algorithms often down-rank resumes based on location, assuming that distance or neighbourhood quality might affect job readiness. Employers may unconsciously use a candidate's pin code as a clue about their social or economic background, assuming that people from certain neighbourhoods are less suitable for the job, even if they have the right skills and qualifications. What seems like a neutral data point, just an address, can discreetly shape hiring decisions long before any real assessment happens. Without anyone noticing, some candidates may already be pushed down the list simply because of where they live. This bias doesn't stop at employers; it can quietly shape how job portals use location data to filter resumes before a human even sees them.
When job portals collect location data as part of their profile-building or resume-filtering processes, these biases risk quietly entering the algorithm’s decision-making. A candidate from a low-income neighborhood could be automatically ranked lower not because of their skills or potential, but simply because of where they live. This silent filtering effect is particularly concerning in algorithm-driven platforms, where these decisions happen at scale and without human oversight. Beyond location, other digital factors like resume formatting and an applicant's familiarity with digital tools can also influence how algorithms interpret and rank candidates.
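One hedged illustration of how this happens: if a screening model is trained on historical shortlisting decisions that already disfavoured certain pin codes, it learns to penalize that feature on its own. The tiny dataset below is invented purely to demonstrate the mechanism:

```python
# Illustrative only: a model trained on biased historical decisions
# learns to penalize a pin-code feature, independent of skill.
from sklearn.linear_model import LogisticRegression

# Features per applicant: [skill_score, from_low_income_pin_code]
X = [
    [0.9, 1], [0.8, 1], [0.7, 1], [0.9, 1],  # strong candidates, "wrong" pin code
    [0.9, 0], [0.8, 0], [0.7, 0], [0.5, 0],  # comparable candidates, "right" pin code
]
# Past shortlisting outcomes reflect historical bias, not ability:
y = [0, 0, 0, 1,
     1, 1, 1, 0]

model = LogisticRegression().fit(X, y)
# The pin-code coefficient comes out negative: the model has learned
# to down-rank an address, even though the address says nothing about skill.
print(model.coef_)
```

At scale, the same mechanism applies to any proxy variable the training data happens to contain: school names, employment gaps, or the language a resume is written in.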
Digital Disadvantage: Formatting & Digital Literacy
Algorithms are also programmed to pick up on resume formatting, keywords, and the use of "professional language". Resumes with clean layouts, PDFs with bullet points, English summaries, and role-relevant terminology are prioritized, while those in local languages or with unconventional formatting may be overlooked. This shift places digital fluency above actual job skill. In many ways, the way you format your resume can matter as much as, or even more than, what's actually on it.
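To see why vocabulary can outweigh substance, consider a simplified, assumed version of an ATS-style keyword screen; the keywords and threshold here are hypothetical:

```python
# A simplified, assumed ATS-style keyword screen: resumes that do not
# contain enough of the job's exact keywords are rejected before any
# human review. Keywords and threshold are illustrative.
import re

def passes_keyword_screen(resume_text: str, keywords: list[str], min_hits: int = 3) -> bool:
    """Return True only if the resume mentions enough exact keywords."""
    text = resume_text.lower()
    hits = sum(1 for kw in keywords
               if re.search(r"\b" + re.escape(kw.lower()) + r"\b", text))
    return hits >= min_hits

keywords = ["python", "sql", "stakeholder management", "agile"]

# Roughly the same experience, different vocabulary: only one passes.
optimized = "Built Agile dashboards in Python and SQL; led stakeholder management."
plain = "Made reports with programming and databases; worked closely with clients."
print(passes_keyword_screen(optimized, keywords))  # True
print(passes_keyword_screen(plain, keywords))      # False
```

Both sentences describe comparable work, but only the one written in the system's expected vocabulary survives the screen.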
This creates an advantage for those familiar with the unspoken rules of digital hiring. The issue isn't just theoretical; it plays out in real job searches every day.
This becomes clearer when we look at what happened to a real job seeker. An Economic Times article describes a candidate who applied to 47 roles but received only three callbacks. His resume was repeatedly rejected, not for lack of qualifications, but because it wasn't optimized for Applicant Tracking System (ATS) filters. After an HR insider advised him to use AI-powered resume optimization tools, his response rate tripled. This underscores how presentation, rather than substance, can determine visibility, and how job portals disproportionately benefit those who know how to "game" the system. While technology has made hiring faster, it has also created new kinds of barriers. This raises an important question: are these biases truly caused by algorithms, or are they amplifying biases that already exist in society?
Are These Biases Unique To Algorithms?
Many of the biases that affect hiring are not new. Human recruiters have long been influenced, often unconsciously, by things like a candidate's accent, name, educational institution, or even resume formatting. For instance, Montgomery and Acheme (2022) found that speakers with non-standard English accents, such as Indian Tamil English, were often perceived as less competent, even when their content matched that of standard-accent speakers. Studies also show that job applicants in India with names signaling caste or community identity are less likely to receive interview calls, even when their qualifications are identical to others'.
These patterns show that hiring discrimination is not limited to one country or culture. Biases based on names, accents, or background appear across the world, though the specific forms may differ. Understanding how these patterns play out in different contexts helps us see how widespread and persistent such issues are.
Similarly, in the U.S., applicants with traditionally African-American names face lower callback rates than those with White-sounding names, despite having identical resumes. What makes algorithmic hiring different is not the presence of bias itself, but how these biases are amplified and systematized. In human-led hiring, individual recruiters may occasionally overlook or correct for irrelevant signals. But algorithms process every resume with rigid consistency, leaving no room for judgment or second chances.
If algorithms are trained on historical hiring data that already reflects social and economic biases, they may learn to associate certain locations, educational institutions, or demographic markers with lower hiring success. Real-world examples of this risk have already emerged. For instance, Amazon scrapped an internal AI recruitment tool after it was found to systematically downgrade resumes that contained indicators of female candidates, an unintended consequence of the algorithm being trained on years of male-dominated hiring data. In short, while conventional hiring biases are well-documented, algorithms make these biases more systematic, less visible, and harder to correct because the filtering happens automatically and at scale.
What Can Be Done?
The growing reliance on algorithmic screening makes it even more critical to address these biases proactively. While technology can improve efficiency, it must also ensure fairness and equal access to employment opportunities for all candidates. Talented candidates risk being overlooked simply because the algorithm focuses on surface-level details instead of actual ability, so it isn't enough to identify the problem; the next step is designing systems that don't subtly sideline capable candidates based on flawed filters. Several interventions can help.
One way forward is to introduce fairness metrics into applicant screening systems, ensuring that candidates from underrepresented backgrounds are not systematically excluded due to irrelevant factors like their address, educational pedigree, or resume formatting. These metrics can help flag and correct disproportionate filtering before qualified candidates are lost in the process. But improving the system alone isn't enough; platforms also need to empower candidates to navigate these algorithms more effectively. For example, portals can offer resume-building tools tailored to first-generation job seekers who may lack guidance on creating resumes that meet algorithmic standards. With easy-to-use templates, keyword suggestions, and formatting guidelines, job portals can help level the playing field for applicants who are otherwise digitally disadvantaged.
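As one concrete illustration of the fairness-metric idea, here is a hedged sketch of a check a platform could run on its own shortlists: the selection-rate ratio between groups, loosely based on the "four-fifths rule" used in US employment-discrimination analysis. The group labels and numbers below are illustrative:

```python
# A hedged sketch of one fairness metric a platform could monitor:
# the selection-rate ratio between two applicant groups. Numbers and
# group labels are illustrative.

def selection_rate(shortlisted: int, applied: int) -> float:
    return shortlisted / applied if applied else 0.0

def disparate_impact_ratio(group_a: tuple[int, int], group_b: tuple[int, int]) -> float:
    """Ratio of the lower group's selection rate to the higher one's.
    Values below 0.8 are a common red flag for adverse impact."""
    rate_a = selection_rate(*group_a)
    rate_b = selection_rate(*group_b)
    hi, lo = max(rate_a, rate_b), min(rate_a, rate_b)
    return lo / hi if hi else 1.0

# e.g. 60 of 300 applicants from low-income pin codes shortlisted,
# versus 90 of 300 from other areas:
ratio = disparate_impact_ratio((60, 300), (90, 300))
print(f"{ratio:.2f}")  # 0.67 -> below 0.8, so the filter deserves review
```

A ratio well below 0.8 doesn't prove discrimination, but it is a cheap, automatable signal that a particular filter deserves human review.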
Ultimately, creating more inclusive hiring technologies demands not just better algorithms, but a deeper awareness of the varied pathways through which people seek work and the hidden obstacles they may face along the way. As India’s job market moves further online, we must ask not just how efficiently platforms match people to jobs, but who they’re leaving behind in the process.
Samata Mhaskar