By: Sukriti Sehgal and Ishneet Chadha
We’ve all seen it: a talented candidate walks into an interview, only to stumble under pressure. Their mind goes blank, their voice trembles, and their confidence fades. Recruiters see hesitation, not potential.
At Cal Hacks 12.0, we set out to solve this. What if technology could help interviewers recognize stress cues in real time and respond with empathy rather than judgment?
That question became AgentViewAR (also known as InterViewAR), an empathy-driven VR interview coaching system that helps recruiters detect and respond to candidate stress signals in real time.
It’s built to create a fair, supportive environment where every candidate gets an equal opportunity to showcase their true abilities, not just their ability to handle stress. By combining empathy with AI, we aim to make hiring more human, inclusive, and emotionally intelligent.

AgentViewAR transforms the interview experience through real-time multimodal analysis.
Inside a Meta Quest 3 VR headset, recruiters see live visual feedback on the candidate’s stress signals.
Each signal is processed and displayed on a floating WebXR HUD, guiding recruiters to slow their tone, offer reassurance, or rephrase questions when stress levels rise. The result is a balanced and emotionally aware interview setting, where every candidate gets a fair, supportive space to showcase their real potential.
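To make that loop concrete, here is a minimal sketch, not our actual model, of how per-utterance vocal features might be fused into a single stress score and mapped to the HUD cues described above. The feature names, weights, and thresholds are illustrative placeholders.

```python
from dataclasses import dataclass

@dataclass
class StressSignals:
    """Hypothetical per-utterance features from the speech-analytics layer."""
    pitch_variance: float  # 0-1, normalized pitch instability
    speech_rate: float     # 0-1, deviation from the candidate's baseline pace
    pause_ratio: float     # 0-1, share of silence in the current window

def stress_score(s: StressSignals) -> float:
    # Illustrative weighted blend; real weights would be tuned on labeled data.
    return 0.4 * s.pitch_variance + 0.35 * s.speech_rate + 0.25 * s.pause_ratio

def coaching_cue(score: float) -> str | None:
    """Map the fused score to the cues the recruiter sees on the WebXR HUD."""
    if score > 0.75:
        return "Pause before your next question."
    if score > 0.55:
        return "Offer reassurance."
    if score > 0.40:
        return "Slow your tone."
    return None  # no intervention needed
```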

Built over 36 hours during our first-ever hackathon, AgentViewAR combines agentic orchestration, real-time speech analytics, and a modern web stack to bring empathy into the loop.
Tech Stack:
- Fetch.ai Agentverse for multi-agent orchestration
- Meta Quest 3 with a floating WebXR HUD for in-headset feedback
- Real-time speech analytics for stress detection
- ChromaDB for session memory
- Claude API for post-interview insights

AgentViewAR runs on a distributed, multi-agent pipeline built for real-time emotion analysis, adaptive coaching, and post-interview intelligence.
Its layers run from real-time emotion analysis through agent orchestration to a final Visualization & Feedback stage, which surfaces the processed signals on the recruiter’s in-headset HUD.
Together, these layers form a fully agentic feedback loop, where Fetch.ai agents orchestrate empathy in real time, ChromaDB maintains institutional memory, and Claude API transforms data into actionable insight for more human-centered interviews.

Fetch.ai’s Agentverse formed the core of our architecture, powering multi-agent orchestration that models both the recruiter and candidate as autonomous, context-aware agents sharing a dynamic emotional state.
Each agent continuously maintains its own conversational context and the dynamic emotional state it shares with its counterpart.
This setup transforms traditional analysis into an adaptive, agentic dialogue, where feedback isn’t static data on a screen but context-aware coaching cues that evolve naturally as the conversation unfolds. In short, Fetch.ai turns empathy into a live system behavior, not an afterthought.
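As a rough illustration of that pattern, the sketch below shows how a candidate agent and a recruiter agent built with Fetch.ai’s uAgents library could exchange a shared emotional-state message. The model fields, update interval, and threshold are assumptions for illustration, not our production code.

```python
from uagents import Agent, Bureau, Context, Model

class EmotionalState(Model):
    stress: float       # fused stress score, 0-1
    engagement: float   # fused engagement score, 0-1

candidate = Agent(name="candidate_agent", seed="candidate demo seed")
recruiter = Agent(name="recruiter_agent", seed="recruiter demo seed")

@candidate.on_interval(period=5.0)
async def share_state(ctx: Context):
    # In the real system this state would come from the live speech-analytics layer.
    await ctx.send(recruiter.address, EmotionalState(stress=0.72, engagement=0.41))

@recruiter.on_message(model=EmotionalState)
async def coach(ctx: Context, sender: str, msg: EmotionalState):
    # Surface a coaching cue when the shared stress estimate crosses a threshold.
    if msg.stress > 0.6:
        ctx.logger.info("HUD cue: slow your tone and offer reassurance")

bureau = Bureau()
bureau.add(candidate)
bureau.add(recruiter)

if __name__ == "__main__":
    bureau.run()
```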
After two sleepless nights, countless coffee refills, and some persistent Wi-Fi hiccups, our team watched the system come to life.
When tension spikes were detected, AgentViewAR suggested prompts like:
“Slow your tone.”
“Offer reassurance.”
“Pause before your next question.”
Watching those cues calm a nervous candidate in real time was our “aha” moment, proof that empathy and AI can work hand in hand.
Our project was honored as the Most Viral Personality winner (ASI:One track) and the Fetch.ai sponsor track winner at Cal Hacks 12.0, the world’s largest collegiate hackathon.
But the real win was seeing empathy quantified, visualized, and applied through autonomous agents.

We’re evolving AgentViewAR into a complete agentic interview ecosystem that blends empathy, personalization, and data intelligence.
Our next step introduces a new Resume Intelligence Agent, designed to automatically parse a candidate’s resume and identify core technical skills, tools, and expertise, such as Python, SQL, TensorFlow, or AWS. These keywords will appear on the right side of the VR interface, displayed alongside existing real-time metrics.
By integrating this agent within the Fetch.ai Agentverse, the system will dynamically generate context-aware question prompts tailored to the candidate’s strengths, allowing recruiters to personalize the interview flow in real time.
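A minimal sketch of what that agent’s core step could look like, assuming a simple keyword-matching approach (the real agent would more likely use an LLM or a richer skills taxonomy). The KNOWN_SKILLS set and prompt templates are placeholders.

```python
import re

# Hypothetical skills vocabulary for illustration only.
KNOWN_SKILLS = {"python", "sql", "tensorflow", "aws", "pytorch", "docker", "kubernetes"}

def extract_skills(resume_text: str) -> list[str]:
    """Return the known skills that appear anywhere in the resume text."""
    tokens = set(re.findall(r"[a-z+#]+", resume_text.lower()))
    return sorted(tokens & KNOWN_SKILLS)

def question_prompts(skills: list[str]) -> list[str]:
    """Template-based prompts; AgentViewAR would generate these dynamically
    inside the Fetch.ai Agentverse instead."""
    return [f"Can you walk me through a project where you used {skill}?" for skill in skills]

if __name__ == "__main__":
    sample = "Built ETL pipelines in Python and SQL; deployed TensorFlow models on AWS."
    skills = extract_skills(sample)
    print(skills)                       # ['aws', 'python', 'sql', 'tensorflow']
    print(question_prompts(skills)[0])  # first tailored prompt
```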
We’re also developing a Physiological and Gesture Recognition Module, using OpenCV, MediaPipe, and pose estimation algorithms to analyze eye movements, facial micro-expressions, and hand gestures. These multimodal inputs will help the system interpret nonverbal cues that indicate stress, confidence, or engagement.
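A rough sketch of how that module could read frames with OpenCV and extract face landmarks with MediaPipe’s FaceMesh. The landmark indices and the eye-openness heuristic are illustrative, not our final pipeline.

```python
import cv2
import mediapipe as mp

def stream_face_landmarks(video_path: str) -> None:
    """Read frames with OpenCV and extract face landmarks with MediaPipe FaceMesh.
    Downstream logic (blink rate, gaze shifts, micro-expressions) would be built
    on top of these per-frame landmark coordinates."""
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as mesh:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                lm = result.multi_face_landmarks[0].landmark
                # Landmarks 159 (upper lid) and 145 (lower lid) bound the left eye;
                # their vertical gap is a crude per-frame eye-openness estimate.
                print(f"eye openness: {abs(lm[159].y - lm[145].y):.4f}")
    cap.release()
```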
To close the loop, we’re implementing Memory Persistence with ChromaDB and long-term vector embeddings. This will allow AgentViewAR to learn across multiple interviews, building recruiter-specific empathy models and providing feedback loops on improvement areas such as communication pace, tone modulation, and bias reduction.
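Under those assumptions, the persistence layer could look roughly like this with ChromaDB’s Python client; the storage path, collection name, session IDs, and metadata fields are placeholders.

```python
import chromadb

# Persistent local store for cross-interview memory.
client = chromadb.PersistentClient(path="./agentviewar_memory")
sessions = client.get_or_create_collection(name="interview_sessions")

# After each interview, store a short summary with recruiter-specific metadata.
sessions.add(
    ids=["session-001"],
    documents=[
        "Candidate showed elevated stress during system-design questions; "
        "the recruiter slowed their pace and stress dropped within two minutes."
    ],
    metadatas=[{"recruiter": "recruiter_a", "avg_stress": 0.62}],
)

# Later, retrieve similar past moments to tailor coaching for the same recruiter.
results = sessions.query(
    query_texts=["candidate freezing on a system-design question"],
    n_results=3,
    where={"recruiter": "recruiter_a"},
)
print(results["documents"])
```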
Finally, every interview session will automatically generate a Recruiter Summary Report: a concise, data-driven brief delivered post-session that captures the key discussion points from the conversation.
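As a sketch of that final step, a post-session call to the Claude API via Anthropic’s Python SDK might look like the following; the model name, prompt wording, and token budget are assumptions.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def recruiter_summary(transcript: str, stress_timeline: str) -> str:
    """Generate a concise post-session brief from the transcript and stress data."""
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=600,
        messages=[{
            "role": "user",
            "content": (
                "Write a concise recruiter summary of this interview. Cover the key "
                "discussion points, moments of candidate stress, and one concrete "
                "suggestion for improving pacing or tone next time.\n\n"
                f"Transcript:\n{transcript}\n\nStress timeline:\n{stress_timeline}"
            ),
        }],
    )
    return message.content[0].text
```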
Together, these features transform AgentViewAR from a single-session VR assistant into a continuous learning and empathy intelligence platform, helping recruiters become not just better evaluators, but better humans.
AgentViewAR shows that Fetch.ai agents can move beyond research and deliver meaningful, real-time impact in human-centered environments.
By combining empathy, data, and adaptive intelligence, we’re proving that interviews don’t have to be stressful; they can be insightful, balanced, and humane.
We’re continuing to refine and expand AgentViewAR, integrating new agentic layers for recruiter intelligence, multimodal emotion detection, and automated post-interview summaries.
Project link: https://lnkd.in/eiXwUe3Y
If this space excites you, we’d love to connect and explore what’s next:
Sukriti Sehgal: https://www.linkedin.com/in/sukritisehgal/
Ishneet Kaur Chadha: https://www.linkedin.com/in/ishneetkaurchadha/
