LLM Engineer – Privacy-Preserving Access Architecture (AWS/Vercel/OpenAI)

Upwork · AU · Budget not specified · Intermediate
Skills: Artificial Intelligence, Python, FastAPI, Django, Flask, React, Next.js, AI Development, Machine Learning, SaaS Development, TensorFlow, Automation, AI Agent Development, Retrieval Augmented Generation, MLOps
I developed a chatbot prototype for a research study in which participants will need an access code to use the bot, so that I can identify the conversation history of those who used the research code. The bot uses Vercel (frontend), AWS (backend), and OpenAI. Currently, access is granted via name, email, and a unique code sent to the participant's email. To create a de-identified strategy and comply with the university's cybersecurity requirements, I wonder if you could do something like the following to reduce the personal information needed for access.

What we need to achieve:
- The chatbot knows the participant is from our study (e.g., via any code).
- The chatbot does not rely on identifiable information (e.g., email, name).
- (Re)identification risk is reduced.
- Governance stays happy.

Option: pre-generated study tokens (best for privacy).

Before the study, I generate 100 random codes:
- STUDY-AIDBACK-001
- STUDY-AIDBACK-002
- STUDY-AIDBACK-003

I send each participant their code after consent. When they log in:
- They enter the code.
- The system generates an internal UUID (Universally Unique Identifier, e.g., a8f3d91c-239x-991a-72bc).
- No email is required.

Chat data is stored only against:
- participant_id
- study_code

There is no identity in the system: if the database is breached, nobody can identify a participant. This is much closer to true de-identification.