
What I Built at Fetch.ai: A Multi‑Agent Platform to Understand Vaccine Hesitancy


Demographics, social listening, and voice signals — stitched together for health‑affairs teams

Estimated read: 7–9 minutes · Stack: Next.js, uAgents, ASI‑1, Supabase/Postgres, Qdrant, Docker

The spark

I built this during my Fetch.ai internship (you can explore the live demo at vaccine-hesitancy.fetch.ai). The brief was open‑ended: could we use Fetch.ai’s agent tech to make public‑health data more useful? Vaccine hesitancy felt like the perfect proving ground. Official surveys tell us where and who. Social media shows what people are saying right now. Frontline conversations reveal why. The platform below pulls those threads together so health‑affairs teams can go from a map to a message in a single session.

The big picture

At a high level, the UI is a Next.js dashboard with four tabs. Each tab talks directly to a specialized agent via REST. There’s no central orchestrator — the frontend routes to the right service based on what the user is doing. Agents are packaged as Docker services with health checks, and data lives in Postgres (Supabase) and Qdrant.

Figure A. Architecture Diagram

Why agents?

Because they let each capability evolve independently. In practice that meant faster iteration during the internship and clearer fault boundaries. I used Fetch.ai uAgents for the agent scaffolding and ASI‑1 as the LLM to interpret queries and write concise, cited answers.

A quick tour (with screenshots)

1) Health Board — where the map meets the data

The Health Board is an interactive UK choropleth built with Mapbox and Recharts on top of ONS cross‑tabs. Filters switch between demographic dimensions (age, ethnicity, religion, disability, employment, IMD quintiles, and more) and view modes. The goal is speed: find pockets of high hesitancy, then pivot into the why.

Choropleth map of the UK showing vaccine hesitancy by region with filters for demographics and view type.

2) Ask the AI — analyst in the sidebar

Type a question like “Compare Black British vs Asian British rates in England and explain why they differ.” The Insights Agent pulls exactly the right slice from Postgres and ASI‑1 writes a short, source‑aware explanation. A small Data Summary panel shows raw vs. filtered rows and flags any data quirks, so the narrative never drifts away from the numbers.

Chat assistant shows a comparison summary with a data summary bar highlighting current selection and notes.
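The pattern behind this tab is a deterministic fetch followed by LLM summarization. A minimal sketch of the two steps, assuming an illustrative table and column naming (the real ONS schema differs):

```python
# Sketch of the Insights Agent's two-step pattern: fetch the exact data slice,
# then ask the LLM to narrate only that slice. Table/column names are invented.

def build_query(dimension: str, groups: list[str], region: str) -> tuple[str, list]:
    """Build a parameterized query against the ONS cross-tab table."""
    placeholders = ", ".join("%s" for _ in groups)
    sql = (
        "SELECT group_name, hesitancy_rate FROM ons_crosstabs "
        f"WHERE dimension = %s AND group_name IN ({placeholders}) AND region = %s"
    )
    return sql, [dimension, *groups, region]

def build_prompt(rows: list[tuple[str, float]], question: str) -> str:
    """Ground the prompt in the fetched numbers so the narrative can't drift."""
    lines = "\n".join(f"- {name}: {rate:.1%}" for name, rate in rows)
    return (
        f"Question: {question}\n"
        f"Data (ONS cross-tabs):\n{lines}\n"
        "Write a short, cited answer using only the data above."
    )

sql, params = build_query("ethnicity", ["Black British", "Asian British"], "England")
prompt = build_prompt(
    [("Black British", 0.21), ("Asian British", 0.14)],
    "Compare Black British vs Asian British rates in England",
)
```

Because the SQL is built before the LLM is involved, the Data Summary panel can report exactly which rows fed the answer.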

3) Social Media Analysis — the public conversation

The X Analysis Agent ingests tweets via a RapidAPI provider and aggregates sentiment, impact, and recency. You’ll see the overall share of positive/negative/neutral posts, a 30‑day volume trend, a distribution donut, and a ranked list of high‑impact posts. It’s not meant to replace official stats, just to anchor messaging in what people are actually discussing today.

Cards show sentiment percentages and total posts, with a donut chart and a list of top‑impact tweets.
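The aggregation itself is simple once posts are labeled. A toy version of the kind of roll‑up this tab shows, where "impact" is a naive engagement proxy (likes plus retweets) rather than the agent’s actual scoring:

```python
from collections import Counter

# Toy roll-up of labeled posts into the dashboard's summary cards.
# "impact" here is likes + retweets; the real scoring may weight differently.
def aggregate(posts: list[dict]) -> dict:
    counts = Counter(p["sentiment"] for p in posts)
    total = len(posts)
    shares = {s: counts.get(s, 0) / total
              for s in ("positive", "negative", "neutral")}
    top = sorted(posts, key=lambda p: p["likes"] + p["retweets"], reverse=True)
    return {"total": total, "shares": shares, "top_impact": top[:3]}

posts = [
    {"text": "...", "sentiment": "negative", "likes": 120, "retweets": 40},
    {"text": "...", "sentiment": "positive", "likes": 10, "retweets": 2},
    {"text": "...", "sentiment": "neutral", "likes": 5, "retweets": 1},
    {"text": "...", "sentiment": "negative", "likes": 300, "retweets": 90},
]
summary = aggregate(posts)
```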

4) Vaccine Resources — answers with citations

This is a semantic library of WHO/CDC/UKHSA/NHS/UK Parliament content. Documents are chunked and embedded; Qdrant powers retrieval; ASI‑1 returns a short answer with linked passages. When someone asks for a talking point, the evidence is one click away.

Document cards grouped by Clinical Guidelines, Safety & Efficacy, Policy & Regulation, and Public Education.
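The retrieval side starts with chunking. A minimal sketch of the idea — real chunkers usually respect sentence boundaries, and the parameters here are placeholders, not the ones the Resource Agent uses:

```python
# Split a document into overlapping character windows before embedding.
# size/overlap are illustrative; production chunkers respect sentence boundaries.
def chunk(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "".join(str(i % 10) for i in range(1000))  # stand-in for a WHO/NHS document
chunks = chunk(doc)
```

Each chunk is embedded and stored in Qdrant alongside source metadata (document, section, URL), which is what makes the one‑click citations possible: the retrieved passage carries its provenance with it.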

5) Voice Analysis — from audio to themes

Upload an MP3/WAV/M4A. The Voice Analyzer Agent transcribes the clip and prepares it for classification.

Audio upload panel with processing‑time notice and supported formats.

6) Voice Analysis — what people actually said

The transcript is scored across categories like safety concerns, distrust of authorities, freedom concerns, natural immunity, religious objections, and misinformation. You also get an overall hesitancy score and keyword chips to skim the conversation.

Panel with per‑category levels (high/moderate/low/none), excerpts, and extracted keyword chips.
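To make the output shape concrete, here is a deliberately naive keyword baseline producing the same per‑category levels and overall score the panel shows. The real pipeline classifies with ASI‑1, not keyword matching, and the category cues below are my own examples:

```python
# Naive keyword baseline illustrating the classifier's output shape.
# The production system uses ASI-1 on the transcript; cues here are invented.
CATEGORIES = {
    "safety_concerns": ["side effect", "unsafe", "blood clot"],
    "distrust_authorities": ["government", "big pharma", "cover up"],
    "natural_immunity": ["natural immunity", "already had"],
}

def score_transcript(transcript: str) -> dict:
    text = transcript.lower()
    scores = {}
    for category, cues in CATEGORIES.items():
        hits = sum(text.count(cue) for cue in cues)
        scores[category] = ("high" if hits >= 2
                            else "moderate" if hits == 1 else "none")
    flagged = sum(level != "none" for level in scores.values())
    scores["overall"] = round(flagged / len(CATEGORIES), 2)  # crude hesitancy score
    return scores

result = score_transcript(
    "I worry about side effects and possible blood clots, "
    "and I don't trust big pharma."
)
```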

Under the hood

How the agents cooperate

There’s no heavyweight orchestration layer. The frontend routes requests directly: Health Board → Insights Agent, Social → X Analysis Agent, Resources → Resource Agent, Voice → Voice Analyzer. It kept the surface area small and the failure modes clear while I iterated.
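The routing logic really is just a lookup. A sketch of the idea in Python — hostnames, ports, and paths are placeholders for the local Compose services, not the actual endpoints (and in the real app this lives in the Next.js frontend, not a Python service):

```python
# Illustrative tab-to-agent routing table. URLs are placeholders for the
# local Compose services, not the production endpoints.
AGENT_ROUTES = {
    "health_board": "http://localhost:8001/insights",
    "social": "http://localhost:8002/x-analysis",
    "resources": "http://localhost:8003/resources",
    "voice": "http://localhost:8004/voice",
}

def route(tab: str) -> str:
    """Resolve a dashboard tab to its agent endpoint; fail loudly on unknown tabs."""
    try:
        return AGENT_ROUTES[tab]
    except KeyError:
        raise ValueError(f"No agent registered for tab '{tab}'") from None
```

Failing loudly on an unknown tab is the whole error-handling story: with no orchestrator, a broken route surfaces immediately in the tab that owns it.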

What ASI‑1 actually does

Two jobs: it interprets natural‑language questions (“compare… explain why…”) and it writes the final, human‑readable answer once the agent has fetched the right data. The same pattern powers voice‑theme classification and resource answers.

Data pipelines

  • ONS → Postgres. I wrote targeted loaders (t1a, t1b, etc.) that normalize the official Excel sheets into analysis‑ready tables. It’s a manual ETL pass today — good enough for a prototype, easy to automate later.
  • X/Twitter. Ingestion runs through a RapidAPI provider rather than direct OAuth. It keeps the demo simple and avoids wrestling with rotating credentials.
  • Resources. The library is chunked and embedded for Qdrant; the agent expands medical terminology and returns cited passages.
  • Voice. Standard speech‑to‑text first, then ASI‑1 for category scoring and keyword extraction.
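The core move in the ONS loaders is reshaping a wide cross‑tab row into tidy, analysis‑ready records. A sketch with invented column labels (the real sheets have different headers and need more cleaning):

```python
# Normalize one wide ONS cross-tab row into tidy records for Postgres.
# Column labels here are invented for illustration; the real sheets differ.
def tidy_row(row: dict, dimension: str) -> list[dict]:
    region = row["Region"]
    return [
        {"region": region, "dimension": dimension, "group": group, "rate": rate}
        for group, rate in row.items()
        if group != "Region"
    ]

records = tidy_row({"Region": "London", "16-24": 0.12, "25-34": 0.09}, "age")
```

One tidy table per dimension is what lets the Insights Agent answer arbitrary "compare X vs Y" questions with a single filtered query instead of sheet-specific code.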

Deployment

Everything is Dockerized with health checks and runs on localhost in Compose for now. During the internship that balance — real services, local ops — let me focus on product decisions without cloud overhead.
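The Compose setup looks roughly like this — service names, ports, and the health endpoint below are assumptions for illustration, not the project’s actual configuration:

```yaml
# Illustrative Compose fragment; names, ports, and /health are placeholders.
services:
  insights-agent:
    build: ./agents/insights
    ports:
      - "8001:8001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8001/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    depends_on:
      - qdrant
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
```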

Who it’s for

The platform is aimed at health‑affairs teams who need both the statistical picture and the lived conversation. In one sitting you can locate a hotspot, see what people are saying about it, and pull a cited explanation to brief stakeholders.

What I learned

Small, well‑typed contracts between the UI and each agent beat a single, do‑everything service. Letting agents own their data sources (Postgres, Qdrant, RapidAPI, ASR) simplified debugging. And keeping the LLM’s job narrow — interpret + summarize, not “be the database” — made answers consistent.

Limitations & guardrails

Survey nonresponse, platform sampling, and ASR errors can skew results. The current Twitter feed comes through a third‑party provider and may change. Audio is uploaded by the user; no PII is collected. Resource answers include sources by design, and the social module follows platform terms.

What’s next

Automate the ONS ETL and add time‑series cohorting. Expand beyond X to additional platforms. Add lightweight playbooks that jump from a detected concern to a recommended message plus citations. When the agents are in the cloud, enable scheduled alerts for regional spikes.

Credits

Built with Fetch.ai uAgents and ASI‑1 during my internship. Thanks to the health‑affairs folks who pressure‑tested early versions and to the open data that makes this possible.


What I Built at Fetch.ai: A Multi‑Agent Platform to Understand Vaccine Hesitancy was originally published in Fetch.ai on Medium, where people are continuing the conversation by highlighting and responding to this story.
