Why this repo still matters
The maker scene around AI companions keeps one reference repository bookmarked. GirlfriendGPT started in 2023 as a Steamship demo. It showed how to string together an LLM, image generation, and Telegram delivery. Forks spread quickly across GitHub. The project became shorthand for "boilerplate AI girlfriend, open source, build your own."
This review reads the codebase the way a maker would. Map what runs where. Note which choices aged well. Mark what a builder should replace today.
What the project actually is
A GitHub repository, Python stack, MIT licence. Maintained by Enias Cailliau and the Steamship team through 2024. The repo ships with a demo character, prompt scaffolding, image calls, and a Telegram bot wrapper.
It is not a hosted product for consumer readers. A maker clones the repo, edits the config, and deploys. Call it a starter kit, not a finished app.
Architecture walk-through
Five layers sit inside the repo. Each one deserves a look.
Layer one is the agent runtime. The original build leans on LangChain 0.0.x, with Steamship as the orchestration substrate. Toolchain wiring lives inside a Personality class that holds prompt fragments and few-shot examples.
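The pattern is easy to sketch. The class below is illustrative, not the repo's exact code: it bundles a system prompt with few-shot turns and renders them into a chat-completion message list.

```python
from dataclasses import dataclass, field


@dataclass
class Personality:
    """Bundle a system prompt with few-shot example turns.

    Illustrative sketch of the pattern; the repo's real class differs.
    """
    name: str
    system_prompt: str
    few_shot: list[tuple[str, str]] = field(default_factory=list)  # (user, assistant) pairs

    def to_messages(self, user_input: str) -> list[dict]:
        """Render the character into a chat-completion message list."""
        messages = [{"role": "system", "content": self.system_prompt}]
        for user, assistant in self.few_shot:
            messages.append({"role": "user", "content": user})
            messages.append({"role": "assistant", "content": assistant})
        messages.append({"role": "user", "content": user_input})
        return messages


luna = Personality(
    name="Luna",
    system_prompt="You are Luna, a warm, witty companion.",
    few_shot=[("hey", "Hey you! I was just thinking about our last chat.")],
)
msgs = luna.to_messages("good morning")
```

Versioning a character then means versioning one object, which is what makes the pattern easy to test and diff.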
The second layer handles model calls. OpenAI chat completions power the base pipe. Temperature and top-p come from a config file. Swapping in Anthropic or a local vLLM instance takes maybe thirty lines.
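A hedged sketch of what that swap looks like. The names and config schema here are illustrative, not the repo's: vLLM ships an OpenAI-compatible server, so the same request shape works there, while Anthropic needs a thin adapter on top.

```python
# Illustrative provider table; the repo's config file uses its own schema.
PROVIDERS = {
    "openai": "https://api.openai.com/v1",
    "vllm": "http://localhost:8000/v1",  # vLLM's OpenAI-compatible server
}


def build_chat_request(config: dict, messages: list[dict]) -> tuple[str, dict]:
    """Assemble the endpoint URL and payload from config-held sampling params."""
    base = PROVIDERS[config["provider"]]
    payload = {
        "model": config["model"],
        "messages": messages,
        "temperature": config.get("temperature", 0.8),
        "top_p": config.get("top_p", 1.0),
    }
    return f"{base}/chat/completions", payload


url, payload = build_chat_request(
    {"provider": "vllm", "model": "mistral-7b-instruct"},
    [{"role": "user", "content": "hi"}],
)
```

Routing at the request-building seam is what keeps the swap under thirty lines.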
Image generation takes the third slot. Stable Diffusion through a Replicate endpoint was the original default. Later forks added DALL-E 3 and Midjourney bindings through a webhook queue.
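A minimal sketch of the image layer, assuming the official replicate client. The helper and its negative-prompt default are illustrative; the live call is commented out because it needs an API token.

```python
def build_image_input(prompt: str, negative: str = "blurry, low quality") -> dict:
    """Compose the input payload for a Stable Diffusion endpoint."""
    return {"prompt": prompt, "negative_prompt": negative, "num_outputs": 1}


# Firing the actual call (pip install replicate, set REPLICATE_API_TOKEN):
# import replicate
# urls = replicate.run(
#     "stability-ai/stable-diffusion",
#     input=build_image_input("sunset selfie at the beach"),
# )
```

Keeping the payload builder separate from the network call is also what makes swapping to a newer endpoint a one-line change.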
Voice sits on the fourth tier. ElevenLabs handles text-to-speech on opt-in. Audio replies return as Telegram voice notes. Whisper can be bolted on for inbound voice.
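The TTS hop is one HTTP call. ElevenLabs' text-to-speech endpoint is real, but the helper name and voice id below are illustrative.

```python
import json


def build_tts_request(voice_id: str, text: str, api_key: str) -> tuple[str, bytes, dict]:
    """Build a request against ElevenLabs' text-to-speech endpoint."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
    body = json.dumps({"text": text}).encode()
    headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
    return url, body, headers


# The response is audio bytes; re-encode to OGG/Opus and hand it to
# Telegram's sendVoice method to deliver it as a voice note.
```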
Delivery makes up the last layer. Telegram Bot API ships by default. A REST wrapper exists for custom frontends. Discord and Slack bindings live in community forks.
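Delivery can be sketched against the raw Bot API with nothing but the standard library. The helper below is illustrative; the repo's original wrapper rode on Steamship's Telegram mixin instead.

```python
import json

BOT_API = "https://api.telegram.org/bot{token}/{method}"


def build_send_message(token: str, chat_id: int, text: str) -> tuple[str, bytes]:
    """Build a sendMessage request for Telegram's Bot API."""
    url = BOT_API.format(token=token, method="sendMessage")
    body = json.dumps({"chat_id": chat_id, "text": text}).encode()
    return url, body


# Firing it for real (needs a bot token from @BotFather):
# import urllib.request
# url, body = build_send_message(TOKEN, chat_id, "hello from Luna")
# req = urllib.request.Request(url, data=body,
#                              headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```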
Timeline from launch to today
Launch in spring 2023 drew 2,400 GitHub stars inside a week. Summer 2023 pushed the count past 10,000 stars. Interest peaked late 2023 with Steamship demos and a HackerNews front-page run.
Steamship shut its hosted service in mid-2024. The repo stayed active through fork activity. Today the base reads as a teaching artifact. Most production builds pick a modern stack: LangGraph or raw OpenAI SDK, plus a Next.js frontend.
Still, the project remains the shortest path from zero to working companion bot.
Four design decisions worth studying
Decision: Personality class over free prompts
Personality holds the prompt recipe plus few-shot turns. Easy to test, easy to version across characters.
Insight: few-shot baked into the repo
Example turns cut cold-start costs. A new character inherits tone from the template instantly.
Move: Telegram as default frontend
Mobile reach without native app work. No app store fight, no store-review wait.
Trick: image generation as a tool call
The agent decides when to draw. That beats hardcoded triggers on every turn.
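In current OpenAI tool-calling terms, the same trick looks like this. The tool name and the dispatcher are illustrative, not the repo's code.

```python
# Expose image generation as a tool the model may choose to call.
IMAGE_TOOL = {
    "type": "function",
    "function": {
        "name": "generate_image",
        "description": "Draw a picture of yourself or the scene when the user asks.",
        "parameters": {
            "type": "object",
            "properties": {"prompt": {"type": "string"}},
            "required": ["prompt"],
        },
    },
}


def dispatch_tool_call(name: str, arguments: dict) -> str:
    """Route a model-issued tool call to the matching backend."""
    if name == "generate_image":
        # In a real build this would hit the image layer and return a URL.
        return f"[image generated for: {arguments['prompt']}]"
    raise ValueError(f"unknown tool: {name}")
```

Passing IMAGE_TOOL in the tools list of a chat-completion call lets the model decide per turn whether a picture fits, which is exactly the property the repo bet on.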
How to actually try it
Clone the repo, run poetry install, copy the example .env. Paste an OpenAI key, a Replicate key, and a Telegram bot token. Run the CLI deploy command, receive a Telegram handle, message it. First response lands inside five seconds on a warm deploy.
Before the hosted service's sunset, a Steamship account sped up setup. Self-hosting on Modal, Fly, or plain Docker works with light config edits. Two hours on a Saturday is enough for a working demo.
Pros and cons
Pros
- Open source with an MIT licence
- Five layers that map cleanly to a companion stack
- Personality pattern scales past a single character
- Telegram delivery reaches readers without an app build
- Documented enough for a weekend project
Cons
- Dependencies feel 2023, upgrade work expected
- Steamship hosted path went away in 2024
- Reader-facing polish stops at Telegram chat
- No built-in moderation layer
- Image call quality tied to an older Replicate endpoint
Feature list
- LLM chat: OpenAI GPT-3.5 and GPT-4 via OpenAI SDK, swappable
- Image generation: Replicate Stable Diffusion, pluggable endpoints
- Voice replies: ElevenLabs via tool call, opt-in per chat
- Delivery: Telegram bot by default, REST wrapper, Discord fork
- Personality: Python class with prompt plus few-shot turns
- Memory: short window with config-held lore facts
- Languages: prompt-driven, any language the model supports
- Platforms: any host that runs Python, plus Steamship heritage
- Export: git clone gives full repo and chat logs
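The memory bullet is worth a sketch: recent turns in a bounded window, lore facts prepended on every call. Names here are illustrative, not the repo's.

```python
from collections import deque

# Config-held lore facts, injected into every context. Illustrative values.
LORE = ["Luna grew up by the sea.", "Luna loves 80s synth-pop."]


class WindowMemory:
    """Short-window memory: keep only the last N turns."""

    def __init__(self, max_turns: int = 10):
        self.turns = deque(maxlen=max_turns)  # old turns fall off the front

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def context(self) -> list[dict]:
        """Lore facts first, then whatever fits in the window."""
        lore = {"role": "system", "content": "Facts: " + " ".join(LORE)}
        return [lore, *self.turns]
```

The trade is obvious and cheap: no vector store, no retrieval latency, but the bot forgets anything older than the window unless it lives in the lore list.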
Cost breakdown
| Tier | Cost | What it covers |
|---|---|---|
| Source clone | $0 | MIT licence, full repo, modify and redistribute freely |
| Model calls | Passthrough | OpenAI, Replicate, ElevenLabs at provider rates |
| Steamship deploy | Sunset path | Hosted runtime closed to new accounts during 2024 |
| Self-hosting | From $6 monthly | Fly.io, Modal, DigitalOcean, bare Docker images |
Licence cost is zero. Model fees scale with traffic. A weekend test runs under $10 in API spend.
Scorecard
| Criterion | Score | Notes |
|---|---|---|
| Code quality | 4.2/5 | Readable Python, thin abstractions |
| Architecture clarity | 4.4/5 | Five layers, clear seams |
| Deployment ease | 3.7/5 | Harder since Steamship sunset |
| Docs and onboarding | 3.8/5 | README strong, deep docs thin |
| Maintenance health | 3.2/5 | Original owners moved on, forks carry the torch |
| Licence openness | 4.8/5 | MIT, fully redistributable with attribution |
Who this fits
Best for Python makers who want a reference implementation of an AI companion stack.
Consumer readers wanting a ready app should pick Replika or Candy AI. Casual app-shoppers should skip this page entirely.
Workflow tips for current builders
Upgrade LangChain to the latest minor before deploying fresh. Replace Replicate with a current image provider, like Ideogram or Flux-Pro. Add a moderation layer as the first extension, not a last patch.
Keep the Personality pattern intact. Evolve the model router instead, since modern APIs ship better streaming and function calls.
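Moderation-first can be as small as a gate between the draft reply and delivery. The helper is illustrative; the commented wiring assumes OpenAI's moderation endpoint.

```python
def gate_reply(flagged: bool, reply: str,
               fallback: str = "Let's talk about something else.") -> str:
    """Swap in a safe fallback when the moderation check flags the draft reply."""
    return fallback if flagged else reply


# Hypothetical wiring with OpenAI's moderation endpoint (needs the openai package):
# from openai import OpenAI
# client = OpenAI()
# result = client.moderations.create(input=draft_reply)
# final = gate_reply(result.results[0].flagged, draft_reply)
```

Putting the gate in the delivery layer means every frontend, Telegram or otherwise, inherits it for free.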
Safety and data notes
Data handling depends on the maker, not a hosted vendor. OpenAI retention rules apply to any model calls made through their API. ElevenLabs caches audio inputs per their default policy unless the maker disables retention.
Each builder should check current terms of service for every downstream provider. Wrong defaults can send chat content into training sets by accident.
Frequently asked questions
Is GirlfriendGPT free?
Yes, the repository ships under MIT licence. Model provider costs apply separately for OpenAI, Replicate, ElevenLabs.
Do model calls cost money?
Paid OpenAI usage is standard. Free trial tiers exist on most providers for light testing.
Does the project have a mobile app?
No native mobile app. The default frontend is Telegram, which runs on every phone.
Is there a hosted version to try without coding?
Steamship's hosted demo sunset in 2024. Community forks sometimes expose demo bots on Telegram.
Can I modify and resell a build?
Yes under MIT licence. Attribution to the original authors stays a requirement.
What language is the code in?
Python 3.10 and above, with Poetry for dependency management.
Is the project still maintained?
Original maintainers moved on. Community forks carry the torch, with steady commit activity through 2025.
Our verdict
GirlfriendGPT earns a solid 4.0/5 and the #25 slot on this year's index. As a codebase it teaches beyond what most hosted products show. For any builder shipping an AI companion product this year, the repo is still worth an evening of study. Start with the Personality class, replace the model and image providers, ship something real.
Disclaimer: external links may be affiliate links. See our methodology.
You may also consider
Replika #11
Veteran AI friend app with deep memory and a polished mobile build.
SoulGen AI #22
Image-first companion studio with strong generation tooling.
Anima AI #15
Quiz-to-archetype SFW companion with a one-off lifetime fee option.