This application is part of a research study exploring how users engage in interviews about their social connections—either with a chatbot or with a human peer. The study, "network.", is conducted by Marco Galle (PH Luzern) with technical support from smartive AG and is financed by the Swiss National Science Foundation.
The key user flow includes:
- Receiving an email link containing an access key that determines the user's group (chatbot interview, human interview, or test group).
- Viewing an introductory video and consenting (or declining) to participate—data is only recorded upon consent.
- Completing an initial questionnaire.
- Viewing a tutorial video and building a personal social network map.
- Depending on the assigned group:
  - Human Interview: The user participates in an audio-recorded discussion with a peer.
  - Chatbot Interview: The user has a ~10-minute conversation with an AI chatbot about their social network.
- Filling out a final questionnaire reflecting on their experience.
- Optionally downloading their network map as a PDF.
All gathered data (consent, questionnaire responses, network maps, interview recordings/transcripts) is stored securely in a PostgreSQL database or Google Cloud Storage.
- React 19
- Next.js 15 (using the App Router)
- TypeScript
- Tailwind CSS
- Shadcn/UI and Radix UI for components
- Vercel AI SDK (Azure OpenAI model integration)
- PostgreSQL
- Uses Next.js App Router for structured pages and route handlers.
- Includes server-side and client-side components for optimized performance and fast rendering.
- Implements AI-driven chatbot interviews via the Vercel AI SDK, with streaming responses and context management.
- Stores conversation transcripts, questionnaire data, and network mapping data in PostgreSQL.
- Provides an audio recording feature for the human interview flow with file storage in Google Cloud Storage.
- Offers a final download option for the user's network map.
- Enforces minimal back-navigation to ensure data integrity.
The application uses a multi-model AI system to handle user interactions during the interview process. The diagram below illustrates how different AI components work together to process user input:
```mermaid
sequenceDiagram
    participant User
    participant API as Chat API Route
    participant Moderator as Moderator AI
    participant Interviewer as Interviewer AI
    participant Summarizer as Summarizer AI
    participant DB as Database

    User->>API: Sends message
    API->>DB: Fetches network map
    API->>Moderator: Validates user message
    Moderator-->>API: Returns validation result

    alt Message count ≥ threshold
        API->>Summarizer: Summarizes conversation
        Summarizer-->>API: Returns conversation summary
    end

    API->>Interviewer: Sends prompt with:
    Note right of Interviewer: - System prompt<br>- Network map<br>- Validation result (if any)<br>- Conversation summary (if any)<br>- Recent messages
    Interviewer-->>API: Streams response
    API-->>User: Streams response to user
    API->>DB: Saves conversation

    alt Interview time elapsed or completed
        Interviewer->>Interviewer: Calls finishInterview tool
        Interviewer-->>API: Returns closing message
        API->>DB: Updates user status
    end
```
This orchestration enables:
- Content moderation, safety, and prevention of topic drift through the Moderator AI
- Contextual awareness through the Summarizer AI (which preserves context while managing token limits)
- Structured and natural conversation flow through the Interviewer AI
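The per-message flow above can be sketched in TypeScript. This is illustrative only: the helper names (`moderate`, `summarize`, `buildInterviewerPrompt`), the `SUMMARY_THRESHOLD` value, and the stubbed logic are assumptions, not the application's actual implementation — in the app these steps are backed by the Moderator, Summarizer, and Interviewer models via the Vercel AI SDK.

```typescript
// Sketch of the orchestration for one chat turn. All names and values
// here are hypothetical; the real app delegates each step to an AI model.

type Message = { role: "user" | "assistant" | "system"; content: string };

const SUMMARY_THRESHOLD = 20; // assumed message count before summarizing
const RECENT_WINDOW = 6; // assumed number of recent turns kept verbatim

// Stub for the Moderator AI: flags unsafe or off-topic input.
function moderate(message: string): { ok: boolean; note?: string } {
  return { ok: true };
}

// Stub for the Summarizer AI: condenses older turns to save tokens.
function summarize(messages: Message[]): string {
  return `Summary of ${messages.length} earlier messages.`;
}

// Assemble the prompt the Interviewer AI receives for each turn:
// system prompt + network map + optional moderator note + either the
// full history or a summary plus the recent window.
function buildInterviewerPrompt(
  systemPrompt: string,
  networkMap: object,
  messages: Message[],
): Message[] {
  const prompt: Message[] = [
    { role: "system", content: systemPrompt },
    { role: "system", content: `Network map: ${JSON.stringify(networkMap)}` },
  ];

  const lastUser = messages[messages.length - 1];
  if (lastUser) {
    const check = moderate(lastUser.content);
    if (!check.ok && check.note) {
      prompt.push({ role: "system", content: `Moderator note: ${check.note}` });
    }
  }

  if (messages.length >= SUMMARY_THRESHOLD) {
    // Replace older turns with a summary; keep only the recent window.
    const older = messages.slice(0, -RECENT_WINDOW);
    const recent = messages.slice(-RECENT_WINDOW);
    prompt.push({ role: "system", content: summarize(older) });
    return [...prompt, ...recent];
  }
  return [...prompt, ...messages];
}
```

The summarization branch is what keeps the Interviewer's context window bounded while long conversations continue to feel coherent.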
- Node.js (20+ recommended)
- Clone the repository.
- Install dependencies: `npm ci`
- Create a `.env` file with the necessary environment variables (see `.env.example` for reference).
- Run the development server: `npm run dev`
- Open http://localhost:3000 in your browser to view the application.
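As a hedged sketch, a `.env` file might look like the fragment below. Only `INSTANCE_CONNECTION_NAME` and the `POSTGRES_*`/`GCP_*` prefixes are named in this README; every concrete variable name shown is a hypothetical placeholder, and `.env.example` in the repository remains the authoritative reference.

```
# Illustrative only — consult .env.example for the actual variable names.
INSTANCE_CONNECTION_NAME=project:region:instance
POSTGRES_USER=app            # hypothetical POSTGRES_* entry
POSTGRES_PASSWORD=change-me  # hypothetical POSTGRES_* entry
POSTGRES_DB=network_study    # hypothetical POSTGRES_* entry
GCP_PROJECT_ID=your-project  # hypothetical GCP_* entry
GCP_BUCKET=your-bucket       # hypothetical GCP_* entry
```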
A script is available to export all collected user data from the database and Google Cloud Storage into a local directory structure.
- Ensure Environment Variables are Set: The script requires the same database connection (`INSTANCE_CONNECTION_NAME`, `POSTGRES_*`) and Google Cloud Storage (`GCP_*`) credentials in your `.env` file as the main application.
- Run the Export Script: `npm run export:data`
- Output: This command will create an `exports` directory in the project root. Inside `exports`, you will find a sub-directory for each user (named with their `user_id`), containing:
  - `network_map_and_questionnaires.xlsx`: Data from the network map and the two questionnaires.
  - `chat_transcript.txt`: Full transcript for users in the 'chatbot' group.
  - `interview_audio.webm`: Downloaded audio recording for users in the 'human' group (if available).
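The resulting layout can be sketched as a small helper that maps a user to the files the export would produce. The function name and the group handling are inferred from this README, not taken from the actual export script.

```typescript
// Sketch of the per-user export layout described above.
// `userExportPaths` is a hypothetical helper, not part of the codebase.

type Group = "chatbot" | "human" | "test";

function userExportPaths(userId: string, group: Group): string[] {
  const base = `exports/${userId}`;
  // Every user gets the spreadsheet with network map and questionnaire data.
  const files = [`${base}/network_map_and_questionnaires.xlsx`];
  if (group === "chatbot") files.push(`${base}/chat_transcript.txt`);
  if (group === "human") files.push(`${base}/interview_audio.webm`); // only if a recording exists
  return files;
}
```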
This project is licensed under the MIT License - see the LICENSE file for details.
This research project is funded by public resources and is intended to serve as a public good. The MIT License was chosen to maximize accessibility and reuse while maintaining basic attribution requirements.