FinGPT

As a Product Designer, I partnered with a stealth fintech startup to design a framework for LLM-powered learning experiences in personal finance, where users remain in control, trust is built transparently, and AI is used to clarify, not convince.

The context

Young adults are eager to invest, but few feel equipped to start. In partnership with a stealth fintech startup, I led the design of a framework for using Large Language Models (LLMs) as trusted copilots for personal finance.

This project resulted in a scalable, low-risk, and explainable framework that banks and fintechs could adopt to enhance user learning, not decision-making.

Looking for LLM use cases in personal finance

We set out to answer a foundational question: Where can LLMs responsibly add value in personal finance?


We mapped candidate use cases against explicit selection criteria (where an LLM adds clear value) and elimination criteria (where risk, regulation, or stakes rule it out).

Identified use case: helping young professionals learn about personal finance using LLMs as an exploration tool

Overview

Role

Product Design Consultant

Impact

This framework was adopted for continued internal development by the client team

Duration

2 months

Team

Product, Engineering

Scope

0 → 1

LLM UX

Product Strategy

Responsible AI

Patterns in how people learn about personal finance

A typical user journey

Insights

Undefined goals make it hard to begin

Goals are often neither specific nor measurable, which makes it hard to begin

People consult many sources of information

People consult ~6 sources, mostly YouTube, Reddit, blogs, and now ChatGPT, but struggle to judge relevance or trustworthiness

LLM as a shortcut to search

People use LLMs as a shortcut to extensive searching, and mostly double-check answers online

People want clarity, not control. They’re not asking AI to decide; they want it to help them understand options faster and frame their thinking.

Designing a framework that allows structured exploration

These insights showed that users aren’t asking for decisions; they want structured exploration.
That informed a system where LLMs act as copilots, not advisors, leading to the framework below.

Structured prompts help users ask better questions

The quality of the answer depends on the structure of the prompt. Effective prompts combine:

  • Intent + context (e.g., “I’m a beginner with $500/month to invest”)

  • Parameters (time horizon, risk tolerance)

  • Trusted financial source filters

  • A system prompt that, with consent, parses the user’s financial data and searches trusted web sources for the latest market developments
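As a rough sketch of how these pieces could assemble into a single structured prompt (the field names and wording are illustrative, not from the client’s system):

```python
from dataclasses import dataclass

@dataclass
class PromptContext:
    """User-supplied context that shapes the prompt (all fields illustrative)."""
    intent: str              # e.g. "learn index-fund basics"
    monthly_budget: int      # e.g. 500
    time_horizon: str        # e.g. "10+ years"
    risk_tolerance: str      # e.g. "moderate"
    trusted_sources: list    # e.g. ["SEC.gov", "Investopedia"]

def build_prompt(ctx: PromptContext) -> str:
    """Assemble intent + context, parameters, and source filters into one prompt."""
    sources = ", ".join(ctx.trusted_sources)
    return (
        f"I'm a beginner with ${ctx.monthly_budget}/month to invest. "
        f"My goal: {ctx.intent}. Time horizon: {ctx.time_horizon}; "
        f"risk tolerance: {ctx.risk_tolerance}. "
        f"Cite only these sources: {sources}."
    )
```

Capturing the parameters as a typed object, rather than free text, is what lets the system validate them before the model ever sees the prompt.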

Structured, explainable answers build trust

To build credibility, each answer surfaces:

  • The reasoning behind the output

  • Source data and confidence level

  • Suggested follow-up prompts for deeper learning
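A minimal sketch of such an answer payload, with hypothetical field names standing in for whatever schema the client team lands on:

```python
from dataclasses import dataclass, field

@dataclass
class ExplainableAnswer:
    """One structured answer: the claim plus the evidence behind it."""
    answer: str
    reasoning: str                 # the reasoning behind the output
    sources: list                  # source data the answer draws on
    confidence: float              # confidence level, 0..1
    follow_up_prompts: list = field(default_factory=list)

def render(a: ExplainableAnswer) -> str:
    """Format the payload so the process is visible alongside the answer."""
    lines = [
        a.answer,
        f"Why: {a.reasoning}",
        "Sources: " + ", ".join(a.sources),
        f"Confidence: {a.confidence:.0%}",
    ]
    if a.follow_up_prompts:
        lines.append("Explore next: " + " | ".join(a.follow_up_prompts))
    return "\n".join(lines)
```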

Re-prompts stemming from guardrails

Guardrails reduce misuse, misinformation, and regulatory risk by:

  • Flagging ambiguity and asking for clarification

  • Recommending alternative questions when an answer yields a high perplexity score

  • Avoiding advice on securities, taxes, or personal recommendations
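These checks could run as a single gate before any answer is generated. The topic list and perplexity threshold below are illustrative placeholders, not values from the project:

```python
from typing import Optional

# Illustrative placeholders -- real values would come from compliance review.
PERPLEXITY_THRESHOLD = 50.0
RESTRICTED_TOPICS = ("which stock should i buy", "tax advice", "should i sell")

def guardrail(question: str, perplexity: float) -> Optional[str]:
    """Return a re-prompt message if the question trips a guardrail, else None."""
    q = question.lower()
    # Regulatory guardrail: no securities, tax, or personal recommendations.
    if any(topic in q for topic in RESTRICTED_TOPICS):
        return ("I can't recommend specific securities or give tax advice. "
                "Try asking how a category of investments works instead.")
    # Ambiguity guardrail: very short questions get a clarification request.
    if len(q.split()) < 4:
        return "Could you add more detail, e.g. your goal or time horizon?"
    # Confidence guardrail: high perplexity triggers alternative questions.
    if perplexity > PERPLEXITY_THRESHOLD:
        return ("I'm not confident in this answer. Try a narrower question, "
                "e.g. comparing two specific options.")
    return None  # safe to answer
```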

Human oversight

For high-stakes situations above a defined threshold, human advisors make the final decision after initial brainstorming with the LLM.

Summaries and analytics for each customer are presented to the human financial advisor for greater personalization and relationship enhancement.

Minimizing hallucination and deployment cost

Agentic AI combining an LLM, RAG, and other tools

Limiting the use of LLMs to where they perform best:

  • Summarizing complex information

  • Parsing patterns across diverse inputs

  • Holding conversational memory

  • Multi-turn conversations for natural language UI
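One way to keep the LLM inside that lane is a simple task router that sends everything else to cheaper, more grounded components. The task names and component labels below are assumptions for illustration:

```python
def route(task_type: str) -> str:
    """Route each task to the cheapest component that handles it well."""
    routes = {
        # LLM strengths: summarization, pattern-finding, conversation.
        "summarize": "llm",
        "parse_patterns": "llm",
        # Retrieval keeps factual answers grounded and cuts hallucination.
        "fact_lookup": "rag",
        # Deterministic math never goes through the model.
        "portfolio_math": "calculator_tool",
        # Live data comes from an API, not model memory.
        "latest_prices": "market_data_api",
    }
    return routes.get(task_type, "llm")  # conversational fallback
```

Routing deterministic and factual work away from the model is what minimizes both hallucination and per-query cost.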

Building context for personalization

One of the core frictions in AI-based financial tools is that users must explain their situation repeatedly: risk profile, income, goals, etc. But that effort creates drop-off and doubt.

To solve this, I designed for permission-based context sharing: a model that allows the assistant to automatically understand key aspects of a user’s financial life, only when the user explicitly consents.

Banks: Connect trusted, regulated data with consent

Banks already hold a wealth of structured financial information: spending history, income patterns, savings behavior, and demographic data.

With user permission, this data can be leveraged to:

  • Auto-populate financial context (e.g., income range, recent transactions)

  • Surface relevant questions or planning scenarios

  • Offer examples grounded in the user’s actual financial footprint
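The consent gate could be sketched as a filter that includes a data category only when the user has explicitly approved it; the scope and field names here are hypothetical:

```python
def build_context(user_profile: dict, consents: set) -> dict:
    """Include a data category only if the user explicitly consented to it."""
    # Hypothetical mapping from consent scope -> profile field.
    allowed = {
        "income": "income_range",
        "transactions": "recent_transactions",
        "savings": "savings_behavior",
    }
    return {field: user_profile[field]
            for scope, field in allowed.items()
            if scope in consents and field in user_profile}
```

The point of the design is that an empty consent set yields an empty context: nothing flows to the assistant by default.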

However, while people trust banks to store and secure money, they’re often skeptical of advice, especially when it may be linked to selling financial products.

Fintech platforms: Context through connected APIs

Fintech products already offer integrations that make personalization seamless. Using APIs like Plaid, Google Calendar, or device location, the assistant could:

  • Pull account balances and categorize spending trends

  • Connect with life events (e.g., a move, job change) for timely guidance

  • Tailor responses based on behavior

This shifts the mental load from “Tell me your situation” to “Confirm this reflects your situation.”

Interfaces that demonstrate the framework

An LLM-powered Financial Guide in the banking ecosystem to help retail investors explore personalized investment strategies

Fig. Guided prompts and permissions for better answers

Fig. Clarifications and guardrails

Fig. Structured answers

The Outcomes

The validated product framework, aligned with user behaviors and technical constraints, was adopted for continued internal development by the client team.

Next steps
  • Prototype a conversational design flow

  • Develop high-fidelity prototypes for the interface

  • Run usability tests

Let’s build something that matters

Debeshi Ghosh

Product Designer

San Francisco

Navigation

Home

Projects

About

Resume

Contact

Projects

FinGPT

DS Markets

IoT Privacy

Fintitude

Skills

Product strategy

End-to-end design

UX Design

Go-to-market
