
AgentViewAR — Building Empathy Into Interviews with Fetch.ai Agents

By: Sukriti Sehgal and Ishneet Chadha

The Problem: Stress Hides Potential

We’ve all seen it: a talented candidate walks into an interview, only to stumble under pressure. Their mind goes blank, their voice trembles, and their confidence fades. Recruiters see hesitation, not potential.

At Cal Hacks 12.0, we set out to solve this. What if technology could help interviewers recognize stress cues in real time and respond with empathy rather than judgment?

That question became AgentViewAR (also known as InterViewAR), an empathy-driven VR interview coaching system that helps recruiters detect and respond to candidate stress signals in real time.

It’s built to create a fair, supportive environment where every candidate gets an equal opportunity to showcase their true abilities, not just their ability to handle stress. By combining empathy with AI, we aim to make hiring more human, inclusive, and emotionally intelligent.

The Vision: Empathy Meets AI

AgentViewAR transforms the interview experience through real-time multimodal analysis.
Inside a Meta Quest 3 VR headset, recruiters see live visual feedback, including:

  • Pace (words per minute)
  • Pauses and speech rhythm
  • Tension level
  • Empathetic suggestions — subtle prompts to create a calmer, stress-free environment.
  • Next-question recommendations — context-aware prompts generated from the ongoing conversation.

Each signal is processed and displayed on a floating WebXR HUD, guiding recruiters to slow their tone, offer reassurance, or rephrase questions when stress levels rise. The result is a balanced and emotionally aware interview setting, where every candidate gets a fair, supportive space to showcase their real potential.
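
To make these signals concrete, here is a minimal sketch of how pace, pauses, and a rough tension score could be derived from timestamped transcript segments. The Segment type, thresholds, and heuristic weights are illustrative assumptions, not AgentViewAR’s actual scoring code.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    start: float  # seconds from interview start
    end: float

def speech_metrics(segments: list[Segment], pause_threshold: float = 1.5) -> dict:
    """Derive pace, pause count, and a naive tension score from timed transcript segments."""
    if not segments:
        return {"wpm": 0.0, "pauses": 0, "tension": 0.0}

    total_words = sum(len(s.text.split()) for s in segments)
    spoken_minutes = sum(s.end - s.start for s in segments) / 60 or 1e-6
    wpm = total_words / spoken_minutes

    # Count gaps between consecutive segments longer than the threshold.
    pauses = sum(
        1 for prev, nxt in zip(segments, segments[1:])
        if nxt.start - prev.end > pause_threshold
    )

    # Toy heuristic: rushed speech and frequent long pauses both raise the tension score.
    tension = min(1.0, max(0.0, (wpm - 150) / 100) + 0.1 * pauses)
    return {"wpm": round(wpm, 1), "pauses": pauses, "tension": round(tension, 2)}
```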

The Tech Behind It

Built over 36 hours during our first-ever hackathon, AgentViewAR combines agentic orchestration, real-time speech analytics, and a modern web stack to bring empathy into the loop.

Tech Stack:

  • Fetch.ai Agentverse — autonomous agents handle data coordination between recruiter and candidate nodes, managing tension scoring and feedback delivery.
  • Groq Whisper — ultra-fast, real-time transcription and tone analysis with very low latency (see the sketch after this list).
  • FastAPI + ChromaDB — backend pipeline and memory layer for tracking emotional recovery rates and question histories.
  • Meta Quest 3 WebXR + Three.js — 3D heads-up display showing real-time stress metrics and coaching cues.
  • Python & WebSockets — connect every layer for seamless live streaming and interactivity.
  • Claude (Anthropic) API — Generates AI summaries of each interview (key moments, cues given, recovery metrics, next-step coaching).
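
As a flavor of how the transcription piece might be called, here is a rough sketch assuming the OpenAI-compatible groq Python SDK; the model name and chunking approach are placeholders rather than the project’s exact code.

```python
from groq import Groq  # assumes the official Groq Python SDK (OpenAI-compatible)

client = Groq()  # reads GROQ_API_KEY from the environment

def transcribe_chunk(audio_bytes: bytes) -> str:
    """Transcribe a short audio chunk with a Whisper model hosted on Groq."""
    result = client.audio.transcriptions.create(
        file=("chunk.wav", audio_bytes),  # (filename, bytes) tuple, OpenAI-style
        model="whisper-large-v3",         # model name is an assumption; check Groq's catalog
    )
    return result.text
```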

Architecture at a Glance

AgentViewAR runs on a distributed, multi-agent pipeline built for real-time emotion analysis, adaptive coaching, and post-interview intelligence.

  1. Capture & Stream → Audio and emotional cues are captured through the Meta Quest 3 (WebXR) headset and streamed via WebSockets to the FastAPI backend (a minimal sketch of this streaming layer follows the list).
  2. Agent Coordination (Fetch.ai Agentverse) → Two autonomous Fetch.ai agents represent the recruiter and candidate. They coordinate speech events, compute tension and tone scores, and trigger contextual empathy feedback prompts in real time.
  3. Real-Time Analytics → The Groq Whisper model performs ultra-fast transcription and tone detection with very low latency, feeding continuous results back to both agents for live coaching cues and dynamic next-question suggestions.
  4. Data Storage & Memory Layer → All transcripts, emotion embeddings, and recovery metrics are stored in ChromaDB, providing persistent memory for trend analysis and cross-session learning.
  5. Summarization & Insight Generation → After each session, Claude (Anthropic) processes stored transcripts and metrics to generate structured summaries containing:
  • Key discussion points
  • Stress and recovery patterns
  • Coaching recommendations for recruiters

  6. Visualization & Feedback →

  • A VR HUD (WebXR + Three.js) displays live pace, tone, and tension metrics during interviews.
  • An interactive dashboard allows recruiters to review sessions, analyze real-time analytics, and access AI-generated summaries and reports directly from the web.
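
The streaming layer from step 1 (and the feedback push in step 6) can be sketched roughly as follows; the endpoint path, message shape, and the placeholder analysis function are assumptions for illustration, not AgentViewAR’s actual API.

```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

def analyze_chunk(session_id: str, audio_chunk: bytes) -> dict:
    """Placeholder for the real pipeline (Groq transcription + agent-based tension scoring)."""
    return {"wpm": 0.0, "pauses": 0, "tension": 0.0}

@app.websocket("/ws/interview/{session_id}")
async def interview_stream(websocket: WebSocket, session_id: str):
    await websocket.accept()
    try:
        while True:
            # The headset streams short audio chunks as binary frames.
            audio_chunk = await websocket.receive_bytes()
            metrics = analyze_chunk(session_id, audio_chunk)
            # Push live metrics back so the WebXR HUD can render them.
            await websocket.send_json({"session": session_id, **metrics})
    except WebSocketDisconnect:
        pass
```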

Together, these layers form a fully agentic feedback loop, where Fetch.ai agents orchestrate empathy in real time, ChromaDB maintains institutional memory, and Claude API transforms data into actionable insight for more human-centered interviews.
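
As one possible shape for that summarization step, the sketch below sends the stored transcript and metrics to Claude via the Anthropic Python SDK; the prompt wording and model name are assumptions, not the project’s exact implementation.

```python
import anthropic  # assumes the official Anthropic Python SDK

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize_interview(transcript: str, metrics: dict) -> str:
    """Ask Claude for a structured post-interview summary; prompt and model are illustrative."""
    prompt = (
        "Summarize this interview for the recruiter. Include key discussion points, "
        "stress and recovery patterns, and coaching recommendations.\n\n"
        f"Metrics: {metrics}\n\nTranscript:\n{transcript}"
    )
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # model name is an assumption
        max_tokens=800,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```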

Why Fetch.ai

Fetch.ai’s Agentverse formed the core of our architecture, powering multi-agent orchestration that models both the recruiter and candidate as autonomous, context-aware agents sharing a dynamic emotional state.

Each agent continuously maintains:

  • Emotional embeddings: capturing tone, hesitation, and sentiment in real time
  • Live communication signals: enabling responsive, two-way interaction
  • A shared empathy model: translating emotional data into meaningful, human-centric feedback

This setup transforms traditional analysis into an adaptive, agentic dialogue, where feedback isn’t static data on a screen but context-aware coaching cues that evolve naturally as the conversation unfolds. In short, Fetch.ai turns empathy into a live system behavior, not an afterthought.
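
To give a feel for what this agent pairing can look like, here is a rough sketch using Fetch.ai’s uagents Python library; the message models, seeds, and the toy scoring heuristic are illustrative stand-ins for AgentViewAR’s real emotional-embedding logic.

```python
from uagents import Agent, Bureau, Context, Model

class SpeechEvent(Model):
    transcript: str
    wpm: float

class EmpathyCue(Model):
    tension: float
    suggestion: str

candidate = Agent(name="candidate", seed="candidate-demo-seed")
recruiter = Agent(name="recruiter", seed="recruiter-demo-seed")

@recruiter.on_interval(period=10.0)
async def send_sample(ctx: Context):
    # Demo driver: in the real system, speech events come from the live transcript stream.
    await ctx.send(candidate.address, SpeechEvent(transcript="Tell me about yourself.", wpm=180.0))

@candidate.on_message(model=SpeechEvent)
async def score_speech(ctx: Context, sender: str, msg: SpeechEvent):
    # Toy tension heuristic standing in for the real emotional-embedding model.
    tension = min(1.0, max(0.0, (msg.wpm - 150) / 100))
    suggestion = "Offer reassurance." if tension > 0.5 else "Keep the current pace."
    await ctx.send(sender, EmpathyCue(tension=tension, suggestion=suggestion))

@recruiter.on_message(model=EmpathyCue)
async def show_cue(ctx: Context, sender: str, msg: EmpathyCue):
    # In the full system this would be forwarded to the WebXR HUD.
    ctx.logger.info(f"tension={msg.tension:.2f} cue={msg.suggestion}")

bureau = Bureau()
bureau.add(candidate)
bureau.add(recruiter)

if __name__ == "__main__":
    bureau.run()
```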

The Experience

After two sleepless nights, countless coffee refills, and some persistent Wi-Fi hiccups, our team watched the system come to life.

When tension spikes were detected, AgentViewAR suggested prompts like:

“Slow your tone.”
“Offer reassurance.”
“Pause before your next question.”

Watching those cues calm a nervous candidate in real time was our “aha” moment, proof that empathy and AI can work hand in hand.

The Recognition

Our project was honored as the Winner for Most Viral Personality (ASI: One Track) and the Fetch.ai Sponsor Track Winner at Cal Hacks 12.0, the world’s largest collegiate hackathon.

But the real win was seeing empathy quantified, visualized, and applied through autonomous agents.

What’s Next

We’re evolving AgentViewAR into a complete agentic interview ecosystem that blends empathy, personalization, and data intelligence.

Our next step introduces a new Resume Intelligence Agent, designed to automatically parse a candidate’s resume and identify core technical skills, tools, and expertise, such as Python, SQL, TensorFlow, or AWS. These keywords will appear on the right side of the VR interface, displayed alongside existing real-time metrics.

By integrating this agent within the Fetch.ai Agentverse, the system will dynamically generate context-aware question prompts tailored to the candidate’s strengths, allowing recruiters to personalize the interview flow in real time.
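
A first cut of that skill extraction could be as simple as the sketch below; the skill list, tokenization, and matching strategy are assumptions, and the eventual agent would likely use an LLM or a richer resume parser.

```python
import re

# Illustrative skill vocabulary; the real agent would use a much larger, curated list.
KNOWN_SKILLS = {"python", "sql", "tensorflow", "aws", "pytorch", "docker", "react"}

def extract_skills(resume_text: str) -> list[str]:
    """Return known skills mentioned in the resume, preserving first-seen order."""
    tokens = re.findall(r"[a-z+#]+", resume_text.lower())
    seen: list[str] = []
    for token in tokens:
        if token in KNOWN_SKILLS and token not in seen:
            seen.append(token)
    return seen

print(extract_skills("Built ETL pipelines in Python and SQL, deployed models on AWS."))
# -> ['python', 'sql', 'aws']
```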

We’re also developing a Physiological and Gesture Recognition Module, using OpenCV, MediaPipe, and pose estimation algorithms to analyze eye movements, facial micro-expressions, and hand gestures. These multimodal inputs will help the system interpret nonverbal cues that indicate stress, confidence, or engagement.
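
As a starting point, that module could build on MediaPipe’s off-the-shelf detectors, roughly as sketched here; the webcam capture and the simple “hands visible” check stand in for the real landmark-to-gesture mapping.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # webcam stand-in for the interview video feed

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB frames; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    visible = len(results.multi_hand_landmarks or [])
    print(f"hands visible: {visible}")  # a real module would map landmarks to gesture features

cap.release()
hands.close()
```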

To close the loop, we’re implementing Memory Persistence with ChromaDB and long-term vector embeddings. This will allow AgentViewAR to learn across multiple interviews, building recruiter-specific empathy models and providing feedback loops on improvement areas such as communication pace, tone modulation, and bias reduction.
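
In practice, that memory layer might look something like the sketch below, where each transcript segment is stored with its emotion metrics and later retrieved by similarity; the collection and field names are illustrative assumptions.

```python
import chromadb

client = chromadb.PersistentClient(path="./agentviewar_memory")
sessions = client.get_or_create_collection("interview_segments")

# Store a transcript segment with its emotion metrics after each exchange.
sessions.add(
    ids=["session42-seg003"],
    documents=["Candidate described a prior ML project; pace steadied after reassurance."],
    metadatas=[{"session": "session42", "tension": 0.32, "recruiter": "demo"}],
)

# Later, retrieve similar past moments to inform recruiter-specific coaching.
hits = sessions.query(query_texts=["candidate recovering after a stressful question"], n_results=3)
print(hits["documents"])
```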

Finally, every interview session will automatically generate a Recruiter Summary Report, a concise, data-driven brief delivered post-session. It will include:

  1. Key discussion points
  2. Candidate emotion and engagement metrics
  3. Recovery curves (how quickly tension decreased after each spike)
  4. Suggestions for future interviewer improvement
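
One hypothetical way to structure that report as data, so it can be rendered on the dashboard or exported, is sketched below; all field names are illustrative.

```python
from pydantic import BaseModel

class RecoveryPoint(BaseModel):
    spike_time_s: float     # when tension spiked (seconds into the interview)
    recovery_time_s: float  # how long until tension returned to baseline

class RecruiterSummaryReport(BaseModel):
    key_points: list[str]
    engagement_score: float             # 0-1 aggregate of emotion/engagement metrics
    recovery_curve: list[RecoveryPoint]
    interviewer_suggestions: list[str]
```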

Together, these features transform AgentViewAR from a single-session VR assistant into a continuous learning and empathy intelligence platform, helping recruiters become not just better evaluators, but better humans.

Beyond the Prototype

AgentViewAR shows that Fetch.ai agents can move beyond research and deliver meaningful, real-time impact in human-centered environments.
By combining empathy, data, and adaptive intelligence, we’re proving that interviews don’t have to be stressful; they can be insightful, balanced, and humane.

We’re continuing to refine and expand AgentViewAR, integrating new agentic layers for recruiter intelligence, multimodal emotion detection, and automated post-interview summaries.

Project link: https://lnkd.in/eiXwUe3Y

If this space excites you, we’d love to connect and explore what’s next:
Sukriti Sehgal: https://www.linkedin.com/in/sukritisehgal/
Ishneet Kaur Chadha: https://www.linkedin.com/in/ishneetkaurchadha/


