Building Offline-First Flutter Apps with OfflineQueue and Supabase
BrainFit is a brain training RPG where players grow a planet by completing cognitive mini-games. That premise has a critical implication: people play it everywhere. On the subway between stations. In elevators. In basement cafes with one bar of signal. If the app dropped data every time the network flickered, it would feel broken. Players would lose game scores, subscription status checks would fail, and social features would simply stop working.
So from early on, I built BrainFit around an offline-first architecture. The core idea is simple: never let a network request be the reason a user's action fails. Every mutation goes through an OfflineQueue first, a SyncService flushes that queue when connectivity returns, and critical state like Pro subscription status is cached locally with integrity protection. In this post, I will walk through the actual implementation -- all the code you see here comes directly from the BrainFit codebase.
The Use Case: Subway Gameplay
Seoul's subway system is excellent for commuting, less excellent for mobile data. Between stations, you might have full LTE. Then you enter a tunnel, and for 30 seconds to two minutes, you have nothing. BrainFit's typical gameplay session lasts about 60 seconds. A player might complete an entire game, earn BQ (Brain Quotient) points, update their Elo rating, and contribute to a cooperative mission -- all while the network is dead.
The naive approach would be to just wrap every network call in a try-catch and silently drop failures. But that means the player's BQ score might not sync to Supabase, their friends would not see their activity in the social feed, and their subscription status could not be verified. We need something better.
The architecture I landed on has three pillars:
- OfflineQueue -- a local SQLite table that stores pending actions
- SyncService -- orchestrates when and how to flush the queue
- ConnectivityService -- detects network state changes in real time
Let me dig into each one.
Pillar 1: The OfflineQueue Table
BrainFit uses Drift (formerly Moor) as its SQLite ORM. The OfflineQueue table is deliberately minimal:
/// Offline action queue
class OfflineQueue extends Table {
  IntColumn get id => integer().autoIncrement()();
  TextColumn get actionType => text()();
  TextColumn get payload => text()();
  DateTimeColumn get createdAt => dateTime().withDefault(currentDateAndTime)();
  IntColumn get retryCount => integer().withDefault(const Constant(0))();
}
Four columns plus an auto-incrementing primary key. That is it. The design choices here are intentional:
- actionType is a string, not an enum. This gives us flexibility to add new action types without a schema migration. Current types include bq_push, feed_push, and subscription_sync.
- payload is a JSON string. Every action type has its own payload shape, and we serialize them with jsonEncode. This keeps the table schema stable -- no matter how many action types we add, the table structure never changes.
- createdAt defaults to the current timestamp. This is critical for queue ordering -- items are flushed in FIFO order, which matters for consistency (more on this later).
- retryCount tracks how many times we have attempted to flush this item. After 3 failures, we consider it stale and remove it.
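To make the payload column concrete, here is roughly what each action type serializes. These shapes are illustrative -- fields not shown elsewhere in this post (like the feed fields) are my assumptions, not the exact production schema:

```dart
import 'dart:convert';

// Illustrative payload shapes for the three action types.
// Field names beyond those shown in this post are assumptions.
final bqPush = jsonEncode({
  'total_bq': 535,
  'planet_stage': 3,
  'current_elo': {'memory': 1420.5, 'focus': 1380.0},
  '_user_id': 'a1b2c3', // injected only when the action is queued offline
});

final feedPush = jsonEncode({
  'event_type': 'game_complete', // hypothetical field
  'game_id': 'n_back',           // hypothetical field
});

// subscription_sync stores no payload at all -- fresh data is
// fetched from RevenueCat at flush time (see the flush handler below).
const subscriptionSync = '{}';
```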
The DAO Layer
The OfflineQueueDao provides five operations:
@DriftAccessor(tables: [OfflineQueue])
class OfflineQueueDao extends DatabaseAccessor<AppDatabase>
    with _$OfflineQueueDaoMixin {
  OfflineQueueDao(super.db);

  Future<void> enqueue(String actionType, String payload) async {
    await into(offlineQueue).insert(OfflineQueueCompanion.insert(
      actionType: actionType,
      payload: payload,
    ));
  }

  Future<List<OfflineQueueData>> getPending() async {
    return (select(offlineQueue)
          ..orderBy([(t) => OrderingTerm.asc(t.createdAt)]))
        .get();
  }

  Future<void> remove(int id) async {
    await (delete(offlineQueue)..where((t) => t.id.equals(id))).go();
  }

  Future<void> incrementRetry(int id) async {
    await (update(offlineQueue)..where((t) => t.id.equals(id)))
        .write(OfflineQueueCompanion.custom(
      retryCount: offlineQueue.retryCount + const Constant(1),
    ));
  }

  Future<void> removeStale() async {
    await (delete(offlineQueue)
          ..where((t) => t.retryCount.isBiggerOrEqualValue(3)))
        .go();
  }
}
A few things to notice:
- getPending() orders by createdAt ascending. This ensures FIFO ordering. If a player earned 100 BQ, then earned 50 more BQ, those updates should be applied in order.
- incrementRetry() uses Drift's expression-based update to atomically increment the retry count. No read-then-write race condition.
- removeStale() deletes anything with 3 or more retries. This is the dead-letter queue. Items that fail 3 times are considered unrecoverable -- perhaps the Supabase RLS policy rejected them, or the payload format changed between app versions.
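The retry/dead-letter semantics these five operations support can be modeled without Drift at all. Here is a small in-memory sketch (my own illustration, not BrainFit code) that is handy for reasoning about, or unit-testing, the flush loop:

```dart
// In-memory model of the queue's retry/dead-letter semantics.
class QueueItem {
  QueueItem(this.id, this.actionType);
  final int id;
  final String actionType;
  int retryCount = 0;
}

/// Runs one flush cycle. [send] returns true on success.
/// Returns the items still pending afterwards.
List<QueueItem> flushOnce(
    List<QueueItem> pending, bool Function(QueueItem) send) {
  final survivors = <QueueItem>[];
  for (final item in pending) {
    if (send(item)) continue; // success -> remove(item.id)
    item.retryCount++;        // failure -> incrementRetry(item.id)
    if (item.retryCount < 3) survivors.add(item); // removeStale() at >= 3
  }
  return survivors;
}
```

An item that fails on three consecutive cycles silently drops out of the queue, exactly mirroring removeStale().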
Pillar 2: ConnectivityService
Before we can flush the queue, we need to know whether we are online. BrainFit wraps the connectivity_plus package in a thin service:
import 'dart:async';

import 'package:connectivity_plus/connectivity_plus.dart';

class ConnectivityService {
  final Connectivity _connectivity = Connectivity();
  bool _isOnline = true;
  StreamSubscription<List<ConnectivityResult>>? _subscription;

  bool get isOnline => _isOnline;

  /// Manual override, used only in tests
  void setOnline(bool value) => _isOnline = value;

  Future<void> init() async {
    final results = await _connectivity.checkConnectivity();
    _isOnline = !results.contains(ConnectivityResult.none);
    _subscription = _connectivity.onConnectivityChanged.listen((results) {
      _isOnline = !results.contains(ConnectivityResult.none);
    });
  }

  void dispose() {
    _subscription?.cancel();
  }
}
The design is simple on purpose. It initializes by checking the current state, then listens for changes. The setOnline() method exists purely for testing -- it lets unit tests simulate offline scenarios without mocking the platform channel.
One subtlety: connectivity_plus tells you about network interface availability, not actual internet reachability. You could have Wi-Fi connected to a captive portal with no actual internet. In practice, this edge case is rare enough for a mobile game that I chose not to add an HTTP ping check. If the flush fails due to unreachable servers, the retry mechanism handles it. More on that shortly.
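If you did want true reachability detection, a lightweight probe can be layered on top. This is a sketch of what that could look like, not something in the BrainFit codebase -- a DNS lookup against a well-known host fails fast behind most captive portals:

```dart
import 'dart:async';
import 'dart:io';

/// Best-effort internet reachability probe (illustrative, not BrainFit code).
/// A DNS lookup is cheap and usually fails behind captive portals,
/// though some portals intercept DNS too -- this is a heuristic, not a proof.
Future<bool> hasInternet({Duration timeout = const Duration(seconds: 3)}) async {
  try {
    final result =
        await InternetAddress.lookup('example.com').timeout(timeout);
    return result.isNotEmpty && result.first.rawAddress.isNotEmpty;
  } on SocketException {
    return false;
  } on TimeoutException {
    return false;
  }
}
```

You would call this inside flushOfflineQueue() before dispatching, at the cost of a few hundred milliseconds per flush.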
Pillar 3: SyncService
The SyncService is the orchestrator. It is surprisingly small:
class SyncService {
  final SocialRepository _socialRepo;
  final CoopMissionService _coopService;
  final ConnectivityService _connectivity;

  SyncService(this._socialRepo, this._coopService, this._connectivity);

  /// Called after each game completes -- BQ push + offline queue flush
  Future<void> onGameComplete({
    required String gameId,
    required String area,
    required int score,
    required int totalBq,
    required int planetStage,
    required Map<String, double> elos,
    String? galaxyId,
  }) async {
    // 1. Push BQ/Elo to Supabase
    await _socialRepo.pushBqUpdate(
      totalBq: totalBq,
      planetStage: planetStage,
      elos: elos,
    );
    // 2. Flush the offline queue
    await _socialRepo.flushOfflineQueue();
  }

  /// Called when the app returns to the foreground
  Future<void> onAppResume() async {
    if (!_connectivity.isOnline) return;
    await _socialRepo.flushOfflineQueue();
    await _socialRepo.heartbeat();
  }
}
Two trigger points:
- After every game completes -- onGameComplete() pushes the latest BQ and Elo data, then flushes any queued items. Notice that pushBqUpdate() itself will enqueue if offline, so the flush handles both the new push and any previously queued items.
- When the app returns to the foreground -- onAppResume() checks connectivity first. If online, it flushes the queue and sends a heartbeat (updating the user's last_active_at timestamp for social features).
This two-trigger approach covers the major scenarios. If a player completes a game underground, the BQ push gets queued. When they finish their next game (possibly now above ground), the flush picks up both the old queued push and the new one. If they close the app entirely and reopen later, onAppResume() handles it.
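Wiring the foreground trigger to the Flutter lifecycle takes only a small observer. The class below is my illustration of one way to do it (the observer and its wiring are assumptions, not BrainFit's actual code), assuming the SyncService shown above:

```dart
import 'package:flutter/widgets.dart';

// Sketch: forward app-lifecycle events to SyncService.onAppResume().
// The class name and wiring are illustrative assumptions.
class SyncLifecycleObserver with WidgetsBindingObserver {
  SyncLifecycleObserver(this._syncService);
  final SyncService _syncService;

  void attach() => WidgetsBinding.instance.addObserver(this);
  void detach() => WidgetsBinding.instance.removeObserver(this);

  @override
  void didChangeAppLifecycleState(AppLifecycleState state) {
    if (state == AppLifecycleState.resumed) {
      _syncService.onAppResume(); // flush queue + heartbeat on foreground
    }
  }
}
```

attach() would typically be called once during app startup, after the services are constructed.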
The Flush: Where the Magic Happens
The actual flush logic lives in SocialRepository.flushOfflineQueue(). This is where action types are dispatched:
Future<void> flushOfflineQueue() async {
  if (!_isOnline || _userId == null) return;
  final items = await _db.offlineQueueDao.getPending();
  for (final item in items) {
    try {
      final data = jsonDecode(item.payload) as Map<String, dynamic>;
      switch (item.actionType) {
        case 'bq_push':
          final uid = data.remove('_user_id') as String?;
          if (uid != null) {
            await _supabase!
                .schema('brainfit')
                .from('user_profiles')
                .update(data)
                .eq('id', uid);
          }
        case 'feed_push':
          await _supabase!
              .schema('brainfit')
              .from('social_feed')
              .insert(data);
        case 'subscription_sync':
          final user = Supabase.instance.client.auth.currentUser;
          if (user != null) {
            final tier = await PurchaseService.getCurrentTier();
            final info = await Purchases.getCustomerInfo();
            String? productId;
            String? expiresAt;
            if (info.entitlements.active.containsKey('pro')) {
              final entitlement = info.entitlements.active['pro']!;
              productId = entitlement.productIdentifier;
              expiresAt = entitlement.expirationDate;
            }
            await _supabase!
                .schema('brainfit')
                .from('user_profiles')
                .update({
              'subscription_tier': tier.name,
              'subscription_expires_at': expiresAt,
              'subscription_product_id': productId,
            }).eq('id', user.id);
          }
        default:
          break;
      }
      await _db.offlineQueueDao.remove(item.id);
    } catch (_) {
      await _db.offlineQueueDao.incrementRetry(item.id);
    }
  }
  await _db.offlineQueueDao.removeStale();
}
Let me break down the interesting decisions here:
Sequential Processing
Items are processed one at a time, in FIFO order. I considered parallel processing for throughput, but sequential ordering guarantees that BQ updates arrive at Supabase in the correct temporal order. If a player went from 500 BQ to 520 to 535, I want the server to see those updates in sequence, not in arbitrary order.
Per-Item Error Handling
Each item has its own try-catch. If item #2 out of 5 fails, items #1, #3, #4, and #5 can still succeed. The failed item gets its retry count incremented, and it will be attempted again on the next flush cycle.
Stale Cleanup
After processing all items, removeStale() deletes anything that has failed 3 or more times. This is a pragmatic choice. If an action cannot succeed after 3 attempts across multiple flush cycles, it is probably broken (bad payload, revoked permissions, etc.). Keeping it in the queue forever would just waste processing time.
The subscription_sync Handler
This handler is interesting because it does not replay a stored payload. Instead, it fetches fresh subscription data from RevenueCat at flush time. This makes sense because subscription status is time-sensitive -- a subscription purchased offline might have already been activated server-side by the time we flush.
The Enqueue Pattern
Every network-dependent mutation in the app follows the same pattern. Here is how pushBqUpdate decides whether to go direct or queue:
Future<void> pushBqUpdate({
  required int totalBq,
  required int planetStage,
  required Map<String, double> elos,
}) async {
  final userId = _userId;
  if (userId == null) return;
  final data = {
    'total_bq': totalBq,
    'planet_stage': planetStage,
    'current_elo': elos,
    'last_synced_at': DateTime.now().toUtc().toIso8601String(),
  };
  if (_isOnline) {
    await _supabase!
        .schema('brainfit')
        .from('user_profiles')
        .update(data)
        .eq('id', userId);
  } else {
    await _db.offlineQueueDao
        .enqueue('bq_push', jsonEncode({...data, '_user_id': userId}));
  }
}
Notice the _user_id field being injected into the payload for offline items. When the action is queued offline, we do not have a guaranteed Supabase session, so we store the user ID explicitly. At flush time, the handler extracts and removes _user_id from the payload before sending the update.
RetryWithBackoff: Handling Transient Failures
Not all network operations go through the OfflineQueue. Some are more time-sensitive and benefit from immediate retry. For these, BrainFit uses an exponential backoff utility:
static Future<bool> retryWithBackoff(
  Future<void> Function() action, {
  int maxRetries = 3,
  Duration initialDelay = const Duration(seconds: 1),
}) async {
  for (var i = 0; i < maxRetries; i++) {
    try {
      await action();
      return true;
    } catch (e) {
      debugPrint('retryWithBackoff attempt ${i + 1}/$maxRetries failed: $e');
      if (i < maxRetries - 1) {
        await Future.delayed(initialDelay * (1 << i));
      }
    }
  }
  return false;
}
The delays follow a standard exponential pattern, doubling after each failure: with the default maxRetries of 3, the function waits 1 second after the first failed attempt and 2 seconds after the second; there is no delay after the final attempt (a higher maxRetries would continue to 4 seconds and beyond). The function returns a boolean so callers can decide what to do after all retries are exhausted.
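The wait schedule can be pulled out as a pure function, which makes the doubling easy to verify in a unit test (this helper is my illustration, not part of the codebase):

```dart
// The schedule retryWithBackoff sleeps through between attempts:
// initialDelay * 2^i, and only *between* attempts -- so for N attempts
// there are N-1 waits.
List<Duration> backoffDelays(int maxRetries, Duration initialDelay) => [
      for (var i = 0; i < maxRetries - 1; i++) initialDelay * (1 << i),
    ];
```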
This is used for subscription synchronization, where immediate delivery matters:
static Future<void> syncSubscriptionToSupabase() async {
  try {
    final user = Supabase.instance.client.auth.currentUser;
    if (user == null) return;
  } catch (_) {
    return;
  }
  final success = await retryWithBackoff(() async {
    final user = Supabase.instance.client.auth.currentUser;
    if (user == null) return;
    final tier = await getCurrentTier();
    final info = await Purchases.getCustomerInfo();
    // ... build update payload (defines productId and expiresAt) ...
    await Supabase.instance.client
        .schema('brainfit')
        .from('user_profiles')
        .update({
      'subscription_tier': tier.name,
      'subscription_expires_at': expiresAt,
      'subscription_product_id': productId,
    }).eq('id', user.id);
  });
  if (!success && _db != null) {
    try {
      await _db!.offlineQueueDao.enqueue('subscription_sync', '{}');
      debugPrint('PurchaseService: subscription sync queued for retry');
    } catch (e) {
      debugPrint('PurchaseService: failed to queue subscription sync: $e');
    }
  }
}
This is a two-tier retry strategy: first, try with exponential backoff for immediate resolution. If all 3 attempts fail, fall back to the OfflineQueue for eventual delivery. This pattern is appropriate for subscription sync because users who just purchased Pro expect it to be reflected on the server quickly, but if the network is truly down, eventual consistency through the queue is acceptable.
Subscription Caching: Offline Pro Access
Here is a scenario that keeps me up at night: a Pro subscriber opens BrainFit on a subway, and the app cannot reach RevenueCat to verify their subscription. Do we downgrade them to Free and lock out Pro games? That would be a terrible user experience.
BrainFit solves this with a SubscriptionCache that stores the subscription state locally with HMAC-SHA256 integrity protection:
class SubscriptionCache {
  static const _tierKey = 'sub_cache_tier';
  static const _productIdKey = 'sub_cache_product_id';
  static const _expirationKey = 'sub_cache_expiration';
  static const _hmacKey = 'sub_cache_hmac';
  static const _salt = 'brainfit_sub_cache_v1_7x2m';

  // ...

  String _computeHmac(String tier, String productId, String expiration) {
    final hmac = Hmac(sha256, utf8.encode(_salt));
    return hmac.convert(utf8.encode('$tier:$productId:$expiration')).toString();
  }

  Future<void> save({
    required SubscriptionTier tier,
    required String? productId,
    required String? expirationDate,
  }) async {
    _tier = tier.name;
    _productId = productId;
    _expirationDate = expirationDate;
    // ... save to SharedPreferences with HMAC ...
  }

  SubscriptionTier? getCachedTier() {
    if (_tier == null) return null;
    if (_expirationDate != null) {
      final expiry = DateTime.tryParse(_expirationDate!);
      if (expiry != null && DateTime.now().isAfter(expiry)) {
        return null;
      }
    }
    return SubscriptionTier.values
        .where((t) => t.name == _tier)
        .firstOrNull;
  }
}
The flow works like this:
- Every time getCurrentTier() successfully contacts RevenueCat, it saves the result to the cache.
- If RevenueCat is unreachable (offline), the catch block reads from the cache.
- The cache checks expiration -- if the subscription has expired according to the stored date, it returns null (Free).
- The HMAC prevents casual tampering with SharedPreferences values.
static Future<SubscriptionTier> getCurrentTier() async {
  if (AppConfig.revenueCatApiKey.isEmpty) return SubscriptionTier.free;
  try {
    final info = await Purchases.getCustomerInfo();
    if (info.entitlements.active.containsKey(_proEntitlement)) {
      final entitlement = info.entitlements.active[_proEntitlement]!;
      final pid = entitlement.productIdentifier;
      final tier = (pid == familyMonthly || pid == familyAnnual)
          ? SubscriptionTier.family
          : SubscriptionTier.pro;
      // Save to the cache
      _subscriptionCache?.save(
        tier: tier,
        productId: pid,
        expirationDate: entitlement.expirationDate,
      );
      return tier;
    }
    _subscriptionCache?.clear();
  } catch (e) {
    // Offline fallback: read from the cache
    final cached = _subscriptionCache?.getCachedTier();
    if (cached != null) {
      return cached;
    }
  }
  return SubscriptionTier.free;
}
Is the HMAC unbreakable? No. A determined attacker with a rooted device could extract the salt from the APK and forge the cache. But BrainFit is a brain training game, not a banking app. The HMAC raises the bar enough to prevent casual tampering with a SharedPreferences editor, which is the realistic threat model for a mobile game.
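The save/load internals are elided in the class above. A minimal sketch of the verify-on-load path could look like the following, using a plain map in place of SharedPreferences (the function names and the map stand-in are my assumptions, not the actual implementation):

```dart
import 'dart:convert';
import 'package:crypto/crypto.dart';

const _salt = 'brainfit_sub_cache_v1_7x2m';

String computeHmac(String tier, String productId, String expiration) {
  final mac = Hmac(sha256, utf8.encode(_salt));
  return mac.convert(utf8.encode('$tier:$productId:$expiration')).toString();
}

/// Sketch of the load path: recompute the HMAC over the stored fields
/// and reject the cache if it does not match.
/// Returns the cached tier name, or null if missing or tampered with.
String? loadTier(Map<String, String?> prefs) {
  final tier = prefs['sub_cache_tier'];
  final productId = prefs['sub_cache_product_id'] ?? '';
  final expiration = prefs['sub_cache_expiration'] ?? '';
  final storedMac = prefs['sub_cache_hmac'];
  if (tier == null || storedMac == null) return null;
  if (computeHmac(tier, productId, expiration) != storedMac) return null;
  return tier;
}
```

Editing any one of the three fields without recomputing the HMAC makes loadTier() return null, which degrades the user to Free rather than granting Pro.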
Supabase Anonymous Auth and RLS
BrainFit uses Supabase anonymous authentication. When the app starts, it calls signInAnonymously() on Supabase, which creates a session with a real auth.uid(). This means that even users who never created an account still have a persistent identity that Row Level Security can use.
All Supabase tables in the brainfit schema have RLS policies based on auth.uid(). For example, the user_profiles table has an update policy like:
CREATE POLICY "Users can update own profile"
ON brainfit.user_profiles
FOR UPDATE
USING (auth.uid() = id);
This works for anonymous users too because auth.uid() returns a valid UUID for anonymous sessions. The security boundary is that users can only read and write their own data, regardless of whether they are "real" (email/OAuth) users or anonymous ones.
This design choice matters for the offline queue because it means every queued action carries the user's Supabase ID. When the queue flushes, the RLS policies still apply -- a queued bq_push action can only update the row belonging to the user who created it.
Edge Cases
Conflict Resolution
BrainFit uses a last-write-wins strategy for most data. BQ updates include a last_synced_at timestamp, and because the queue flushes in FIFO order, updates normally reach the server in the order they were created -- the latest write simply overwrites the previous value. For use cases like social feed entries (which are inserts, not updates), there is no conflict at all -- each entry gets its own row.
I considered adding server-side conflict detection with version vectors, but for a mobile game, the complexity was not justified. The worst case of a BQ conflict is that a player's displayed score might be off by a few points briefly before the next sync corrects it. That is acceptable.
Queue Ordering
getPending() orders by createdAt ascending, and items are processed sequentially. This ensures causal ordering: if action A happened before action B, A will be flushed first. One caveat: two items created within the same timestamp resolution tie on createdAt, and SQLite makes no ordering guarantee beyond the explicit ORDER BY terms, so adding the auto-increment primary key as a secondary ordering term is what makes the FIFO order fully deterministic.
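Making that tiebreak explicit in the query is a one-line change -- a suggested hardening of the getPending() shown earlier, not the code as it currently stands:

```dart
// Suggested variant of getPending(): order by createdAt, then by the
// auto-increment id, so items created within the same timestamp
// resolution still flush in insertion order.
Future<List<OfflineQueueData>> getPending() async {
  return (select(offlineQueue)
        ..orderBy([
          (t) => OrderingTerm.asc(t.createdAt),
          (t) => OrderingTerm.asc(t.id),
        ]))
      .get();
}
```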
Max Retries and the Dead-Letter Pattern
The retryCount field acts as a poor man's dead-letter queue. After 3 failed attempts, removeStale() deletes the item. I chose 3 because each flush cycle represents a significant time gap (at least one game completion or app resume), so 3 retries means the system has tried across multiple sessions.
What happens to the lost data? For BQ pushes, the next successful pushBqUpdate will write the current cumulative BQ, effectively catching up. For feed events, the lost entry simply will not appear in friends' feeds -- acceptable for a social feed that is not mission-critical. For subscription syncs, the server-side RevenueCat webhook is the source of truth anyway.
What If the Queue Grows Too Large?
In theory, a player could be offline for days and accumulate dozens of queued items. In practice, BrainFit's play limit system (3 games per day for Free users, unlimited for Pro) caps the queue growth. Even a Pro user playing heavily might generate 20-30 items per day. Processing those sequentially takes under a second on a modern phone.
Lessons Learned
Start simple. The OfflineQueue table has just 5 columns. I was tempted to add priority levels, TTL fields, and batch processing from the start. None of those turned out to be necessary. The simple FIFO queue with retry counts handles everything BrainFit needs.
Separate detection from action. ConnectivityService only detects state. SyncService decides what to do with that state. This separation makes testing straightforward -- you can set connectivity.setOnline(false) and verify that the queue grows without any network mocking.
Cache aggressively, but with integrity. The subscription cache uses HMAC not because I expected sophisticated attacks, but because it costs almost nothing to add and prevents the most common tampering vector (SharedPreferences editors on rooted devices).
Two-tier retry is pragmatic. For time-sensitive operations like subscription sync, try immediately with backoff. If that fails, fall back to the queue for eventual delivery. This gives you the best of both worlds without overcomplicating either system.
Accept eventual consistency. BrainFit is a game, not a bank. If a player's BQ is out of sync for a few minutes, nobody notices. Designing for eventual consistency let me build a much simpler system than strong consistency would have required.
The Architecture Diagram
If I were to draw the data flow, it would look like this:
Game Complete
      │
      ▼
pushBqUpdate()
      │
      ├── Online?  ──→ Supabase (direct)
      │
      └── Offline? ──→ OfflineQueue (SQLite)
                            │
                            ▼
                      Next trigger:
                      - onGameComplete()
                      - onAppResume()
                            │
                            ▼
                    flushOfflineQueue()
                      ├── Success → remove from queue
                      └── Failure → incrementRetry()
                                        │
                                        └── retryCount >= 3?
                                              → removeStale()
This pattern has served BrainFit well through beta testing with users across Seoul's subway system. Zero data loss reports, zero "my score disappeared" complaints. The system is not clever -- it is just careful about the basics: persist everything locally, sync when you can, retry when you fail, and give up gracefully when something is truly broken.
If you are building a Flutter app that needs to work in unreliable network conditions, I hope this gives you a concrete starting point. The OfflineQueue pattern is not novel, but the devil is in the details -- and those details are what make the difference between an app that "should work offline" and one that actually does.