Artificial Process Outsourcing
WHY THIS MATTERS
How emerging technologies reorganize information systems, workflows, and decision-making inside organizations
The research program "Artificial Process Outsourcing: Augmentation, Automation, and Global Impacts," led by Principal Investigator Dr. Brian Jabarian, is a five-year collaboration with PSG Global Solutions, a subsidiary of Teleperformance, signed in July 2024. It is funded by infrastructure grants from Schmidt Sciences, Google Cloud, EV, and others.
A key, underrecognized channel through which emerging technologies will transform the economy is high-volume markets, where entire value chains of repetitive, quantifiable tasks are already being automated end-to-end by cooperating artificial agents and augmented by human-in-the-loop expertise. Working daily with firms reveals pressing infrastructural, organizational, and behavioral challenges that must be tackled now. These challenges raise open questions that require large-scale, causal evidence to inform firm strategies and policy for the transition toward more advanced AI systems, ensuring they are developed for the benefit of humans rather than at their expense.
The program embeds theory-driven natural field experiments in the recruitment process outsourcing (RPO) and business process outsourcing (BPO) sectors, linking randomized treatments to real-world outcomes across millions of job applicants, thereby transforming the sector toward Artificial Process Outsourcing (APO).
Antony Avram, Artificial Agents Coming Together, 2025
FEATURED RESEARCH
Choice as Signal: Designing AI Adoption in Labor Market Screening
with Pëllumb Reshidi
SSRN
We study the design of human-AI screening systems in a hiring environment where applicants choose between interviewing with a human recruiter or an AI voice agent, a choice that creates a new and informative signal. Once firms condition on screener choice, welfare reversals emerge: choice benefits firms and high-ability applicants, but leaves low-ability ones worse off than under a predetermined assignment of the screening technology. Using data from a large-scale field experiment in which 70,000 applicants were randomly assigned to a human interviewer, an AI agent, or a choice between the two, we develop a structural estimation framework to quantify how choice-as-signal shapes match quality and welfare. This framework also allows us to evaluate alternative screening systems, with preliminary results suggesting higher welfare under joint human-AI screening than under either technology alone. Overall, we show that AI adoption in screening is a design problem rather than a simple human-versus-AI substitution decision.
Voice AI in Firms: A Natural Field Experiment on Automated Job Interviews
with Luca Henkel
SSRN
Job interviews are a key stage in hiring through which firms collect information about potential employees, yet they often produce noisy signals of match quality. We study the impact of automating job interviews with AI voice agents. Partnering with a recruitment firm, we conducted a natural field experiment in which 70,000 applicants were randomly assigned to be interviewed by human recruiters, AI voice agents, or given a choice between the two. In all three conditions, human recruiters evaluated interviews and made hiring decisions based on applicants' performance in the interview and a standardized test. Contrary to the forecasts of professional recruiters, we find that AI-led interviews increase job offers by 12%, job starts by 18%, and job retention up to four months by 16-18% among all applicants. Analyzing interview transcripts reveals a key mechanism driving these results: AI agents achieve 'controlled variance'. They follow interview guidelines more consistently, cover a more uniform set of topics, and reduce interviewer-driven dispersion while remaining responsive within each conversation, which is associated with more hiring-relevant information collected from applicants. In response to AI-led interviews, recruiters score the interview performance of AI-interviewed applicants higher, but place greater weight on standardized tests in their hiring decisions. Applicants accept job offers with a similar likelihood and rate both the interview quality and the recruiter similarly in a customer experience survey. Moreover, when offered the choice, 78% of applicants choose the AI recruiter. Overall, our results provide evidence on the types of environments where AI automation may be most effective: by automating noisy stages of information collection, AI can improve human decision-making.
RESEARCH PROGRAM
-
Routing & Autonomy
How should applicants be allocated between AI and human interviewers to maximize skill-adjusted hires, firm surplus, fairness, and candidate agency?
Information Integrity
Can real-time multimodal fraud detection keep deepfakes, synthetic voices, and sensitive-attribute leakage under control without harming conversion or equity?
Welfare Distribution & Regulation
How are gains distributed between firms and workers? What disclosure and audit regimes secure public trust?
Workflow Augmentation-Automation
Which recruitment funnel stages should be automated first, and how do compound automation effects reshape cost-per-hire, time-to-hire, human-in-the-loop tasks, and carbon footprint?
Contracts, Trade & Markets
How do reskilling, new incentives, and gig platforms (e.g., AgentsOnly) reshape recruiter performance, contracts, wage dispersion, and global talent flows?
-
Our team embeds theory-driven natural AI field experiments directly into live RPO/BPO operations worldwide.
Large scale: from at least 100,000 up to 5 million+ job applicants across our experiments.
Linked outcomes: offers, wages, retention, recruiter productivity, applicant satisfaction, and downstream client performance.
Unified infrastructure: randomization and causal inference at industrial scale, with immediate relevance for firm strategy and welfare policy. A minimal sketch of the underlying assignment logic appears below.
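For illustration, here is a minimal Python sketch of the kind of three-arm random assignment described above, in which applicants are deterministically mapped to the human-recruiter, AI-voice-agent, or applicant-choice arm. The arm labels, allocation shares, salt, and hashing rule are assumptions made for exposition, not the program's production pipeline.

# Illustrative sketch only: deterministic three-arm assignment (human / AI / choice),
# mirroring the experimental design described above. Names and shares are assumed.
import hashlib
from collections import Counter

ARMS = ["human_recruiter", "ai_voice_agent", "applicant_choice"]  # assumed labels
SHARES = [1 / 3, 1 / 3, 1 / 3]  # equal allocation, assumed

def assign_arm(applicant_id: str, salt: str = "apo-wave-1") -> str:
    """Map an applicant ID to a treatment arm, deterministically and reproducibly."""
    digest = hashlib.sha256(f"{salt}:{applicant_id}".encode()).hexdigest()
    u = int(digest[:8], 16) / 0xFFFFFFFF  # pseudo-uniform draw in [0, 1]
    cumulative = 0.0
    for arm, share in zip(ARMS, SHARES):
        cumulative += share
        if u <= cumulative:
            return arm
    return ARMS[-1]

if __name__ == "__main__":
    # Quick balance check on 70,000 synthetic applicant IDs.
    counts = Counter(assign_arm(f"applicant-{i}") for i in range(70_000))
    print(counts)

Deterministic, salted hashing keeps assignments reproducible and auditable when treatments are embedded in live operations.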
-
Current and upcoming projects include:
Human–AI Learning: Theory and Field Evidence from Job Interviews
Automated Recruitment Evaluation (benchmarking human vs. multimodal AI prediction)
Next focus areas:
Fraud detection pipelines (deepfakes, identity spoofing, multimodal security).
Gig-economy staffing models and contract design (AgentsOnly).
General equilibrium effects on wages and talent flows across BPO hubs.
Hybrid AI–human workflows (“AI-first, human-escalation” designs).
-
This research program goes beyond traditional AI studies by:
Randomizing live stages of business pipelines.
Linking every stage of the RPO/BPO pipeline (not just prediction tasks).
Embedding dynamic human integration rules based on real-time uncertainty.
Incorporating security-conversion layers (multimodal fraud detection).
Operating at global industrial scale, with outcomes tracked to wages, retention, recruiter effort, and applicant satisfaction.
-
Formal DUA between the University of Chicago Booth School of Business and PSG Global Solutions, ensuring publication rights prior to any data collection.
Original DUA signed in July 2024, renewed for five years (2025–2030).
Dr. Jabarian became Chief Economist at PSG in an unpaid advisory role in August 2025 to lead this new five-year program.
All field experiments are pre-registered on the AEA RCT Registry.
All field experiments are IRB-approved at the University of Chicago Booth School of Business.
Data are stored and analyzed in our secure Booth Google Cloud Platform environment, established through a cloud partnership with Google.
-
TEAM MEMBERS
-
Brian Jabarian
Principal Investigator
Brian is the Howard & Nancy Marks Fellow at the University of Chicago Booth School of Business.
-
Luca Henkel
Co-Author
Luca is an Assistant Professor of Finance at the Erasmus School of Economics.
-
Pëllumb Reshidi
Co-Author
Pëllumb is an Assistant Professor of Economics at Florida State University.
-
Andrew Koh
Co-Author
Andrew is a Ph.D. Candidate in Economics at MIT.
-
Ruru Hoong
Co-Author
Ruru is a Ph.D. Candidate in Business Economics at Harvard Business School.
-
Eugenio Piga
Co-Author
Eugenio is a Ph.D. Student in Economics at UCSD.
-
Mariya Pominova
Co-Author
Mariya is a Ph.D. Candidate in Economics at Duke University.
-
Bernard Chen
Research Assistant
Undergraduate Student, University of Chicago
-
Marco Di Giacomo
Research Assistant
-
Shubhaankar Gupta
Research Assistant
Undergraduate Student, University of Chicago
-
Andrew James
Research Assistant
Master's Student, University of Chicago
-
Ziyue Feng
Research Assistant
-
Rishane Dassanyake
Research Assistant
OUR FIRM PARTNER
PSG Global Solutions is a wholly owned subsidiary of Teleperformance SE, the French-headquartered global leader in outsourced business services, with ~500,000 employees and over €10 billion in annual revenue.
Teleperformance acquired PSG in October 2022 for about $300 million, and PSG continues to operate under its existing leadership within Teleperformance’s specialized services portfolio. Before (and at the time of) the acquisition, PSG was a high-growth U.S. recruitment process outsourcing (RPO) firm with roughly $75 million in annual revenue, ~4,000 employees, and a client base across healthcare and other high-volume sectors, combining human recruiting expertise with digital automation to scale talent acquisition.
OUR FUNDERS
SOCIAL IMPACT
CBS (press release)
Barchart (press release)
Yahoo Finance (press release)
Booth Center for Applied AI (interview)
Business Insider (quote)
Financial Times (mention)
Fortune (mention)
Rest of World (mention)
HRM Outlook (mention)
HR Tech Cube (mention)
Kyla Scanlon’s Newsletter (mention)
Nasdaq (mention)
El Espectador (mention)
eWeek (mention)
Morning Brew (mention)
ReWorked (mention)
Greg Isenberg’s post (mention)
Ethan Mollick’s post (mention)
Talent Edge (mention)
Au Feminin (mention)
TF1 (mention)
