Features deliberately deferred from V1 to keep the first ship narrow. Each section below was costed and judged "not load-bearing for the V1 mission." They land back in scope when the trigger condition listed at the top of each section fires.
See ARCHITECTURE.html for the V1 system in scope.
Playwright-driven Chromium per account, with per-account stable fingerprints and a dedicated mobile-carrier proxy. The full anti-correlation stack — only worth building when the Device + MCP combination genuinely runs out of headroom.
New tables (`fingerprints`, `iproxy_connections`, `ip_rotations`, `sessions`):

```sql
fingerprints       (id, account_id, spec_json, created_at, burned_at)

iproxy_connections (id, iproxy_external_id, api_key_vault_ref, carrier, country,
                    bound_account_id, state, current_ip, last_rotated_at,
                    unique_ip_window_days, created_at, retired_at)
                   -- state ∈ {free, bound, cooling_off, retired}

sessions           (id, account_id, channel, started_at, ended_at, outcome)

ip_rotations       (id, iproxy_connection_id, rotated_at, old_ip, new_ip, reason)
```
The accounts.channel CHECK constraint expands to include 'browser'.
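A sketch of that migration, where the existing V1 channel values (`'device'`, `'mcp'`) and the constraint name are assumptions:

```sql
alter table accounts drop constraint accounts_channel_check;  -- name is assumed
alter table accounts add constraint accounts_channel_check
  check (channel in ('device', 'mcp', 'browser'));
```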
We operate our own production-grade mobile proxy business through iproxy.online — a fleet of real Android devices, each producing a real mobile-carrier IP. Gold-tier proxy signal for social platforms.
A physical Android phone running the iproxy agent. Each phone = one "connection" = one mobile IP at a time (plus a rotation history). Carrier, country, and device metadata are queryable.
Each connection has its own API key. We pass Authorization: Bearer <connection_api_key> against https://iproxy.online/api/cn/v1/. Keys are stored in Supabase Vault, one row per iproxy_connections record.
Two modes: manual (POST .../command-push with changeip) and automatic (update-settings with ip_change_enabled + ip_change_interval_minutes).
ip_change_wait_unique tells iproxy to never reissue an IP that's been used for that connection within a configurable lookback. Combined with 1:1 account binding: "no IP duplication ever" per-account, no homegrown ledger.
Per connection: GET /api/cn/v1/ip-history, GET .../traffic/by-day, GET .../uptime. These feed the per-account proxy health panel.
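To make the call shape concrete, a minimal `httpx` sketch. Only the base URL, the bearer-auth scheme, `command-push`/`changeip`, and `GET /api/cn/v1/ip-history` come from the text above; the full `command-push` path and the payload shape are assumptions.

```python
import httpx

BASE = "https://iproxy.online/api/cn/v1"

async def rotate_ip(connection_api_key: str) -> None:
    # Manual rotation mode: push the changeip command.
    async with httpx.AsyncClient() as client:
        r = await client.post(
            f"{BASE}/command-push",  # doc elides the prefix as '.../command-push'
            headers={"Authorization": f"Bearer {connection_api_key}"},
            json={"command": "changeip"},  # payload shape is an assumption
        )
        r.raise_for_status()

async def ip_history(connection_api_key: str) -> list:
    # Feeds the per-account proxy health panel.
    async with httpx.AsyncClient() as client:
        r = await client.get(
            f"{BASE}/ip-history",
            headers={"Authorization": f"Bearer {connection_api_key}"},
        )
        r.raise_for_status()
        return r.json()
```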
1 browser account → 1 iproxy connection, for life. Same isolation philosophy as the channel rule. If a connection burns, the account is paused; if the phone hardware fails, an operator re-binds (rare, audited).
```python
from __future__ import annotations  # IproxyConnection / ConnectionHealth defined elsewhere

class ProxyAgent:
    async def provision(self, account_id: str) -> IproxyConnection:
        """At account enrollment (browser channel only). Reserves a free connection,
        stores its API key in Vault, sets ip_change_wait_unique=True, returns binding."""

    async def current_ip(self, account_id: str) -> str: ...
    async def rotate(self, account_id: str, reason: str) -> str: ...
    async def health(self, account_id: str) -> ConnectionHealth: ...

    async def release(self, account_id: str) -> None:
        """On account retirement: returns connection to pool after cool-off + IP-history purge."""
```
iproxy_connections is the source of truth for fleet inventory and assignment state.
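Inside `provision()`, the reservation step can be a single atomic claim against that table. A sketch, assuming Postgres row locking; the exact statement is illustrative:

```sql
update iproxy_connections
set state = 'bound', bound_account_id = :account_id
where id = (
  select id from iproxy_connections
  where state = 'free'
  order by created_at
  limit 1
  for update skip locked   -- concurrent enrollments never grab the same phone
)
returning id, iproxy_external_id;
```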
Per-account stable, realistic browser fingerprints. Mimics AdsPower / Multilogin behavior. Same fingerprint reused across every Browser-backend session for that account — never randomized per login.
FingerprintAgent owns the pool: mints stable per-account browser fingerprints, rotates only on confirmed burn signal (not on every login — that defeats the point).
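In Playwright terms, a "stable fingerprint" means the same context parameters on every session. A minimal sketch; the `SPEC` fields are illustrative (real spec_json covers far more surface), and the proxy argument is the account's dedicated iproxy connection:

```python
from playwright.async_api import async_playwright

# Illustrative spec_json: minted once at enrollment, reused for every
# session of this account, never randomized per login.
SPEC = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 ...",
    "viewport": {"width": 1280, "height": 800},
    "locale": "en-US",
    "timezone_id": "America/New_York",
}

async def open_context(proxy_server: str):
    pw = await async_playwright().start()
    browser = await pw.chromium.launch(proxy={"server": proxy_server})
    return await browser.new_context(
        user_agent=SPEC["user_agent"],
        viewport=SPEC["viewport"],
        locale=SPEC["locale"],
        timezone_id=SPEC["timezone_id"],
    )
```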
Each adapter operation reports an outcome: ok / soft-block / hard-block / shadowban-suspected. FingerprintAgent and ProxyAgent react to these signals; one plausible policy is sketched below.
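The outcome taxonomy is from above; the specific rules and the `burn()` helper are assumptions, constrained only by the rotate-on-confirmed-burn principle:

```python
from enum import Enum

class Outcome(Enum):
    OK = "ok"
    SOFT_BLOCK = "soft_block"
    HARD_BLOCK = "hard_block"
    SHADOWBAN_SUSPECTED = "shadowban_suspected"

async def react(outcome: Outcome, account_id: str, fingerprints, proxy) -> None:
    """Rotate the IP on suspicion; burn the fingerprint only on a
    confirmed hard block, keeping it stable otherwise."""
    if outcome is Outcome.HARD_BLOCK:
        await fingerprints.burn(account_id)  # burn() is a hypothetical method
        await proxy.rotate(account_id, reason="hard_block")
    elif outcome in (Outcome.SOFT_BLOCK, Outcome.SHADOWBAN_SUSPECTED):
        await proxy.rotate(account_id, reason=outcome.value)
```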
Account signup is the highest-risk flow on every platform — signup detection is where anti-bot teams invest hardest. Until a customer specifically asks for "make me 5 new TikTok accounts," V1 treats accounts as imported, not created. V2 brings back AccountAgent as a signup driver and the 2FA-relay wiring it needs.
Creates and warms up accounts. Picks an execution backend, requests a fingerprint, drives signup flow, stores credentials, ramps activity over a warm-up curve.
When a login prompts for 2FA, the Telegram bot pings the operator (or a connected Dr.Emails IMAP-worker for email codes). The agent waits, the operator (or Dr.Emails) submits, the flow resumes.
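The wait/resume handshake is a small piece of async plumbing. A sketch, assuming an in-memory pending-code registry; the Telegram and Dr.Emails wiring that calls `submit_2fa_code()` is out of scope here:

```python
import asyncio

_pending: dict[str, asyncio.Future] = {}

async def await_2fa_code(account_id: str, timeout_s: float = 300.0) -> str:
    """Called by the signup driver when a platform prompts for a code.
    Blocks until submit_2fa_code() delivers one, or times out."""
    fut: asyncio.Future = asyncio.get_running_loop().create_future()
    _pending[account_id] = fut
    try:
        return await asyncio.wait_for(fut, timeout=timeout_s)
    finally:
        _pending.pop(account_id, None)

def submit_2fa_code(account_id: str, code: str) -> None:
    """Called by the Telegram bot handler or the Dr.Emails IMAP-worker."""
    fut = _pending.get(account_id)
    if fut is not None and not fut.done():
        fut.set_result(code)
```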
The IMAP-worker lives in the Dr.Emails repo. Both pieces (the signup driver and the 2FA relay) are deferred until signup is in scope.
Real followers don't have a single handle — they exist as @aurora_fan_tt on TikTok, @aurorafan on Instagram, and Sarah K. on LinkedIn. Dr.Social V2 treats each per-platform handle as a separate audience_identity and probabilistically links them to one audience_member.
How linking and merging work:

- The same distinctive handle seen on two platforms (e.g. @sarah_k_designs_2026): auto-merge if handle entropy is high.
- Weaker signals accumulate in identity_link_signals; a sum-of-weights threshold produces an auto-merge candidate that is shown to the operator for confirmation rather than auto-merging silently.
- Operator confirmation sets merge_status to operator_confirmed and locks the link.
- An operator split sets operator_split; the system never re-merges that pair.
- Every merge and split is recorded in events for audit.

```sql
audience_members      (id, tenant_id, display_name_guess, avatar_hash, notes,
                       merge_status CHECK in ('auto','operator_confirmed','operator_split'),
                       created_at, last_seen_at)

audience_identities   (id, tenant_id, audience_member_id, platform, external_handle,
                       external_user_id, display_name, avatar_url, bio,
                       confidence float, first_seen_at, last_seen_at,
                       UNIQUE (tenant_id, platform, external_user_id))

audience_interactions (id, tenant_id, audience_identity_id, account_id,
                       kind in ('dm','comment','reply','mention','follow','like','share'),
                       ref_id, ts, payload_json)

identity_link_signals (id, tenant_id, audience_identity_id, signal_kind, signal_value,
                       weight float, observed_at)

threads               (id, tenant_id, account_id, audience_identity_id, last_message_at)
                      -- V1 threads reference accounts directly; V2 adds audience_identity_id
```
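A sketch of candidate surfacing from those signals; `:merge_threshold` is a hypothetical tuning parameter:

```sql
select s.audience_identity_id, sum(s.weight) as link_score
from identity_link_signals s
where s.tenant_id = :tenant_id
group by s.audience_identity_id
having sum(s.weight) >= :merge_threshold;  -- becomes an operator-facing candidate
```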
audience_members are tenant-scoped. If two tenants both have @sarah_k as a follower, those are two independent records. We never cross-link audience data between tenants.
V1 already ships an email post-request path — see V1 §10. It's wired through Cloudflare Email Routing catch-all on dr-social.app → one operator Gmail inbox → IMAP poll every 30s. This is enough for MVP volumes and adds zero email-vendor cost.
V2 graduates that path to dedicated inbound infrastructure when one of these triggers fires:
Gmail allows roughly 15 simultaneous IMAP connections and ~2,500 MB/day of IMAP bandwidth per account. If the operator's mailbox starts queueing or losing messages, swap the catch-all forward target to AWS SES (or Postmark Inbound) and write a Supabase Edge Function that POSTs each inbound message straight into the API.
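On the API side, the receiving endpoint is small. A FastAPI sketch (the stack the doc's api service already runs); the route and every field name are assumptions:

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/intake/email")  # hypothetical route
async def inbound_email(req: Request) -> dict:
    """Receives one parsed message from the SES/Postmark webhook
    (or the Supabase Edge Function fronting it)."""
    msg = await req.json()
    row = {
        "source": "email",                             # existing post_requests.source value
        "to": msg.get("to"),                           # resolves to a talent slug
        "body": msg.get("text", ""),
        "provider_message_id": msg.get("message_id"),  # dedup, like gmail_message_id
    }
    # Insert into post_requests elided; same row V1's IMAP poller writes.
    return {"accepted": True}
```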
V1 trusts the talent's slug: anyone who knows the talent's intake address on dr-social.app can submit. Add a talent_request_senders allowlist (one row per verified email per talent) plus a verification flow in /me when a talent's address starts attracting noise.
```sql
talent_request_senders (id, talent_id, email, verified_at, last_used_at)
-- only emails from this allowlist may submit to the intake address
-- The post_requests.source enum doesn't change (already accepts 'email' in V1);
-- only the *ingestion path* moves from Gmail-IMAP to SES/Postmark webhook.
```
dr-social.app → operator Gmail → intake/gmail_imap.py poller → post_requests row. Dedup is via gmail_message_id UNIQUE. See V1 §10 for the full flow.
V1 has a single tenants.default_tick_interval_seconds that drives every talent in the tenant. V2 adds per-talent override and dynamic adjustment.
```sql
-- Adds to talents table:
talents.tick_interval_seconds  INT NULL          -- NULL = use tenant default
talents.last_ticked_at         TIMESTAMPTZ NULL

-- Worker side: resolve the per-talent effective cadence
select p.id,
       coalesce(p.tick_interval_seconds,
                t.default_tick_interval_seconds,
                :system_default_tick_seconds) as effective_tick_seconds,
       p.last_ticked_at
from talents p
join tenants t on t.id = p.tenant_id
where p.status = 'active'
  and (p.last_ticked_at is null
       or now() - p.last_ticked_at >= make_interval(secs => …));
```
PersonaAgent can adjust tick_interval_seconds dynamically (warm-up phase, blackout windows, burn-signal back-off). The "Tick interval" field in /me.html Preferences becomes editable.
V1 dashboard polls every 5s via htmx hx-trigger="every 5s". V2 swaps to Supabase Realtime subscriptions on events, posts, messages, and content_queue.
V1 dashboard is server-rendered HTML with htmx for partials. Simple to develop, simple to deploy, no Node toolchain. V2 introduces React + Vite when richness demands it, but the URL space stays identical so adoption is incremental.
Everything is served from src/drsocial/static/: the operator dashboard under /dashboard/ plus the talent portal at /me. V1 has six operator pages: Overview, Talents, Accounts, Queue, Inbox, Settings. The pages below land back when their underlying feature does:
| Page | Maps to V2 section | What it shows |
|---|---|---|
| Audience | §6 | Resolved audience_members with cross-platform identity links, interaction history, merge-suggestion queue, manual merge/split controls. |
| Fingerprints | §3 | Per-account fingerprint registry, burn history, manual quarantine. |
| iproxy Fleet | §2 | Connection inventory, current IP, rotation timeline, carrier mix, traffic, uptime per phone. |
| Devices | §1 (channel matrix) | Device pool inventory, OS/serial, account binding, idle/busy/quarantined state. |
| MCP Grants | §1 (channel matrix) | OAuth grants per account, scopes, last refresh, manual re-auth. |
| Agents | (debugging) | Live agent status, A2A message bus tail, last invocations per agent. |
| Jobs | (debugging) | Job queue inspector, failure traces, requeue. |
| Audit Log | (compliance) | Full event stream with search/filter. V1 ships events table; this is the UI on it. |
| Moderation | (standalone) | V1 shows moderation flags inline in Queue; V2 splits to a dedicated page when flag volume justifies it. |
| Analytics | (standalone) | V1 folds an analytics rollup into Overview; V2 splits to a dedicated page when the rollup outgrows a panel. |
V1 runs api + worker as asyncio tasks inside one Railway service — cheaper, simpler ops, one restart loop. When that becomes a contention bottleneck, split into two services with separate Dockerfiles. Same code, just a different entrypoint per service.
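For contrast, a minimal sketch of the V1 single-service entrypoint that this split undoes, assuming uvicorn as the ASGI server and a placeholder worker loop:

```python
import asyncio
import os

import uvicorn
from fastapi import FastAPI

app = FastAPI()

async def worker_loop() -> None:
    while True:
        # Placeholder for the real asyncio job loop.
        await asyncio.sleep(5)

async def main() -> None:
    # V1 shape: API server and worker share one process and one event loop.
    config = uvicorn.Config(app, host="0.0.0.0",
                            port=int(os.environ.get("PORT", "8000")))
    await asyncio.gather(uvicorn.Server(config).serve(), worker_loop())

if __name__ == "__main__":
    asyncio.run(main())
```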
```text
-- V2 deployment shape
service: api      Dockerfile.api      $PORT (public)    runs FastAPI + serves dashboard
service: worker   Dockerfile.worker   no public port    runs the asyncio job loop
-- the two services scale independently
```