Spotify Mobile Engineer (iOS/Android)

This guide features 10 challenging Mobile Engineer interview questions for Spotify (Senior to Staff levels), covering custom UI rendering, background execution, offline synchronization, and modular architecture aligned with Spotify's mission to provide a seamless listening experience anywhere.

1. The "Infinite Scroll" Performance - Rendering Complex Views

Difficulty Level: High

Role: Senior Mobile Engineer (iOS/Android)

Source: Spotify Engineering (Mobile Infrastructure)

Topic: UI Performance & Memory Management

Interview Round: Mobile System Design (60 minutes)

Business Function: Core Experience (Home Screen)

Question:

"The Spotify Home screen is a complex list of nested horizontal carousels, videos, and dynamic cards. Users on older Android devices/iPhones report 'Jank' (dropped frames) when scrolling quickly.

1. How do you profile this to find the bottleneck?

2. The server sends a massive JSON payload. How do you parse and render this without blocking the Main Thread?

3. Design a recycling strategy for nested views (Horizontal Recycler inside Vertical Recycler) to minimize memory footprint."

Answer Framework

STAR Method Structure:

Situation: "Scroll Jank" ruins the premium feel of the app and correlates with lower session times.

Task: Achieve a consistent 60fps (16ms per frame) on low-end devices.

Action: Moved JSON parsing and image decoding to background threads. Implemented a RecycledViewPool shared across nested adapters.

Result: Reduced dropped frames by 90% and memory usage by 30%.

Key Competencies Evaluated:

Concurrency: Offloading work from the UI Thread.

View Recycling: Understanding how RecyclerView (Android) or UICollectionView (iOS) reuse cells.

View Hierarchy: Flattening layouts to reduce measure/layout pass time.

Answer (Part 1 of 3): Profiling & Diagnosis

Tools: I would use Systrace/Profiler (Android) or Instruments (iOS Time Profiler).

Signals: Look for "Choreographer: Skipped frames" logs. Check if cellForRow / onBindViewHolder is taking >16ms.

Common Culprit: Often, it's not the rendering, but "Overdraw" (drawing the same pixel 3 times) or doing string formatting/Date parsing inside the scroll loop.

Answer (Part 2 of 3): Asynchronous Data Prep

The Problem: Parsing a 50KB JSON blob on the Main Thread causes a freeze.

Solution: Use a ViewModel to parse data on an IO thread.

Diffing: Calculate the "Diff" (DiffUtil/IGListKit) on a background thread. Only dispatch the final "Insert/Update/Delete" commands to the UI thread. This ensures the Main Thread only updates views, never processes logic.
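The background-diffing idea can be sketched in plain Java. This is an illustrative simplification of what DiffUtil/IGListKit compute (the real algorithms also produce move operations and positions); `ListDiffer` and its method names are assumptions for this example, and the actual thread hop to the UI is elided.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Compute insert/remove commands on a background thread, then hand only
// this small result to the UI thread, which applies it without any logic.
public class ListDiffer {
    public static List<String> inserted(List<String> oldItems, List<String> newItems) {
        Set<String> old = new HashSet<>(oldItems);
        List<String> result = new ArrayList<>();
        for (String item : newItems) if (!old.contains(item)) result.add(item);
        return result;
    }

    public static List<String> removed(List<String> oldItems, List<String> newItems) {
        Set<String> fresh = new HashSet<>(newItems);
        List<String> result = new ArrayList<>();
        for (String item : oldItems) if (!fresh.contains(item)) result.add(item);
        return result;
    }
}
```

The UI thread then receives only the two small command lists, never the raw payload.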

Answer (Part 3 of 3): Shared View Pools (The Nested Problem)

Scenario: Each horizontal carousel (e.g., "Recently Played", "Rock Mix") usually creates its own pool of views. When you scroll down, you destroy row 1 and create row 5.

Optimization: Use a Shared RecycledViewPool.

Logic: Since "Album Cards" look the same in Row 1 and Row 10, we share the pool.

When Row 1 scrolls off-screen, its cards are put into the global pool and immediately reused by Row 10. This prevents the "Creation Spike" (GC Alloc) during scrolling.
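The shared-pool behavior can be sketched with a toy class. This is a hypothetical illustration, not the real RecyclerView.RecycledViewPool API: strings stand in for detached card views, and `SharedCardPool` is an invented name.

```java
import java.util.ArrayDeque;

// One global pool shared by every carousel row: scrapped cards go back to
// the pool and are reused by other rows instead of being re-inflated.
public class SharedCardPool {
    private final ArrayDeque<String> pool = new ArrayDeque<>(); // stand-in for detached views
    private final int maxPooled;
    private int created = 0;

    public SharedCardPool(int maxPooled) { this.maxPooled = maxPooled; }

    // Reuse a pooled card if possible; only "inflate" (allocate) as a last resort.
    public String obtain() {
        String card = pool.pollFirst();
        return card != null ? card : "card-" + (++created);
    }

    public void recycle(String card) {
        if (pool.size() < maxPooled) pool.addLast(card);
    }

    public int createdCount() { return created; }
}
```

With the pool shared, scrolling from row 1 to row 10 reuses the same card objects, so `createdCount()` stops growing and the GC allocation spike disappears.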

2. The "Offline Mode" Engine - Synchronization & Storage

Difficulty Level: Very High

Role: Staff Mobile Engineer

Source: Spotify Core Experience

Topic: Data Consistency & Storage

Interview Round: System Design (60 minutes)

Business Function: Offline Experience

Question:

"A user toggles 'Download' on a playlist with 2,000 songs (10GB). They have spotty Wi-Fi and limited disk space.

1. Design the download manager architecture. How do you handle prioritization and partial failures?

2. How do you ensure DRM (Digital Rights Management) security so users can't just copy the MP3s?

3. The user runs out of disk space at 95%. How does the system recover gracefully?"

Answer Framework

STAR Method Structure:

Situation: Downloading gigabytes of data on mobile is fragile (network drops, battery death).

Task: Build a robust Sync Engine that prioritizes user intent and system health.

Action: Architected a Job Queue system with exponential backoff. Implemented encrypted local storage (AES) with key rotation.

Result: Increased download success rate from 85% to 99%; handled out-of-space errors without corrupting the database.

Key Competencies Evaluated:

State Management: Handling "Pausing," "Resuming," and "Retrying."

Security: Encryption at rest.

File I/O: Efficient disk writing without blocking the UI.

Answer (Part 1 of 3): The Sync Engine Architecture

Component: A persistent Foreground Service (Android) or Background Session (iOS) that survives app termination.

Queue Logic:

Priority: Explicit user downloads > Automatic caching.

Batching: Do not download 2,000 files in parallel. Download 3 at a time to keep throughput high without saturating the connection or fragmenting disk writes.

State Machine: Each track has states: IDLE -> DOWNLOADING -> ENCRYPTING -> COMPLETED.
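The per-track state machine can be made explicit so that illegal transitions fail loudly. A minimal sketch, with FAILED added as the retry-queue state (an assumption consistent with the exponential-backoff design above):

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Each track moves IDLE -> DOWNLOADING -> ENCRYPTING -> COMPLETED; anything
// outside the allowed edges throws, so a corrupted queue is caught early.
public class TrackStateMachine {
    public enum State { IDLE, DOWNLOADING, ENCRYPTING, COMPLETED, FAILED }

    private static final Map<State, Set<State>> ALLOWED = new EnumMap<>(State.class);
    static {
        ALLOWED.put(State.IDLE, EnumSet.of(State.DOWNLOADING));
        // back to IDLE models "paused"; FAILED feeds the retry/backoff queue
        ALLOWED.put(State.DOWNLOADING, EnumSet.of(State.ENCRYPTING, State.FAILED, State.IDLE));
        ALLOWED.put(State.ENCRYPTING, EnumSet.of(State.COMPLETED, State.FAILED));
        ALLOWED.put(State.FAILED, EnumSet.of(State.DOWNLOADING));
        ALLOWED.put(State.COMPLETED, EnumSet.noneOf(State.class));
    }

    public static State advance(State from, State to) {
        if (!ALLOWED.get(from).contains(to))
            throw new IllegalStateException("illegal transition " + from + " -> " + to);
        return to;
    }
}
```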

Answer (Part 2 of 3): DRM & Security

No MP3s: We never store raw audio files.

Chunking: The audio is split into binary chunks.

Encryption: Each chunk is encrypted (AES-256). The decryption key is stored in the device's Keystore/Keychain (Hardware-backed security).

Playback: The player decrypts chunks in memory on the fly. The decrypted file never touches the disk.

Answer (Part 3 of 3): Handling Disk Full

Proactive: Check Disk.freeSpace() before starting a chunk.

Reactive: If an IOException (No space) occurs:

1. Pause the queue.

2. LRU Eviction: Check if there is "Old Cache" (songs streamed but not explicitly downloaded) and delete them to free space.

3. User Prompt: If still full, show a "Storage Full" toast and pause the download state until space is freed. Do not crash.
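The LRU eviction step can be sketched as follows. This is a hedged illustration, assuming entries are tagged `pinned` when the user explicitly downloaded them; the class and field names are invented for this example.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Streamed-cache entries are evictable; explicit user downloads are pinned
// and never deleted. Returns the number of bytes actually freed.
public class CacheEvictor {
    public static class Entry {
        final String id; final long bytes; final boolean pinned; final long lastUsed;
        public Entry(String id, long bytes, boolean pinned, long lastUsed) {
            this.id = id; this.bytes = bytes; this.pinned = pinned; this.lastUsed = lastUsed;
        }
    }

    public static long evictForSpace(List<Entry> entries, long bytesNeeded) {
        List<Entry> evictable = new ArrayList<>();
        for (Entry e : entries) if (!e.pinned) evictable.add(e);
        evictable.sort(Comparator.comparingLong((Entry e) -> e.lastUsed)); // oldest first
        long freed = 0;
        for (Entry e : evictable) {
            if (freed >= bytesNeeded) break;
            entries.remove(e);
            freed += e.bytes;
        }
        return freed;
    }
}
```

If `evictForSpace` returns less than `bytesNeeded`, the queue stays paused and the "Storage Full" prompt is shown.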

3. Background Audio - Keeping the Music Alive

Difficulty Level: High

Role: Mobile Engineer (Core Player)

Source: Mobile Platform Team

Topic: OS Lifecycle & Background Execution

Interview Round: Domain Expertise (45 minutes)

Business Function: Playback

Question:

"Both iOS and Android aggressively kill background apps to save battery.

1. How do you architect the player to ensure music keeps playing when the user locks the phone or opens a game?

2. How do you handle 'Audio Focus' (e.g., Google Maps speaks directions)?

3. How do you keep the 'Now Playing' notification in sync with the actual audio state?"

Answer Framework

STAR Method Structure:

Situation: The OS views background apps as battery drains. Audio apps are the exception, but they must follow strict rules.

Task: Maintain playback continuity and correct lock-screen metadata.

Action: Implemented AVAudioSession (iOS) and MediaBrowserService (Android). Handled Audio Focus interrupts (Ducking vs. Pausing).

Result: Zero playback interruptions during backgrounding; seamless integration with external hardware (Bluetooth headphones).

Key Competencies Evaluated:

OS Internals: Understanding Service lifecycle and Background Modes.

Inter-Process Communication (IPC): Talking to the OS media controller.

Interrupt Handling: Managing transient vs. permanent audio loss.

Answer (Part 1 of 3): The Background Architecture

Android: Use a Foreground Service. This shows a visible notification to the user, telling the OS "This app is important, do not kill it."

iOS: Enable the "Audio, AirPlay, and Picture in Picture" background mode capability. Configure the AVAudioSession category to Playback.

Separation: The UI Activity can die (user swipes away), but the Player Service must remain bound to the Application Context.

Answer (Part 2 of 3): Audio Focus & Ducking

Scenario: User is listening to music. Google Maps says "Turn Left."

Transient Loss (Can Duck): We receive AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK.

Action: We lower the volume to 30% (Duck) but keep playing. When focus returns, we ramp volume back to 100%.

Permanent Loss: User opens YouTube. We receive AUDIOFOCUS_LOSS.

Action: We must Pause immediately.

Answer (Part 3 of 3): Syncing State (The "Truth")

Problem: The UI says "Playing," but the Audio Engine stopped.

Solution: The Audio Engine is the Source of Truth.

Observer Pattern: The UI and the Notification are passive observers. They subscribe to a PlayerState stream (RxJava/Combine).

Flow: Engine stops -> Emits PAUSED event -> UI updates Play button -> Notification updates. Never the other way around.
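The one-way flow above can be sketched without any Rx/Combine dependency. A plain-Java illustration (the class and method names are assumptions, not Spotify's actual player API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// The engine is the single source of truth: UI and notification are passive
// observers of one state stream and never mutate playback state themselves.
public class PlayerEngine {
    public enum State { PLAYING, PAUSED }

    private final List<Consumer<State>> observers = new ArrayList<>();
    private State state = State.PAUSED;

    public void subscribe(Consumer<State> observer) { observers.add(observer); }

    // Only the engine changes state; observers merely render what they are told.
    public void play()  { emit(State.PLAYING); }
    public void pause() { emit(State.PAUSED); }

    private void emit(State next) {
        state = next;
        for (Consumer<State> o : observers) o.accept(next);
    }

    public State state() { return state; }
}
```

Because both the Play button and the notification subscribe to the same stream, they can never disagree with each other or with the engine.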

4. Modular Architecture - Decoupling Features

Difficulty Level: High

Role: Senior Mobile Engineer

Source: Spotify Engineering Culture (Squads)

Topic: Architecture & Scalability

Interview Round: System Design (60 minutes)

Business Function: Developer Experience

Question:

"We have 100+ mobile engineers working on the same app. The 'Search' team and the 'Player' team should not break each other's code.

1. How do you modularize the app to ensure separation of concerns?

2. How do you handle navigation between modules (e.g., Search -> Player) without circular dependencies?

3. Explain how you would implement 'Feature Flags' to turn off a module remotely if it crashes."

Answer Framework

STAR Method Structure:

Situation: A monolithic codebase led to slow build times (20 mins) and constant merge conflicts.

Task: Break the monolith into independent Feature Modules.

Action: Adopted a Dependency Injection graph and a Navigation Router pattern.

Result: Reduced build time to 2 minutes (incremental); enabled independent testing and release of features.

Key Competencies Evaluated:

Dependency Management: Gradle Modules / Swift Packages.

Design Patterns: Coordinator Pattern / Router.

CI/CD: Feature flagging infrastructure.

Answer (Part 1 of 3): The Module Graph

Structure:

App Module: The thin shell that puts everything together.

Core Modules (API, Analytics, UI Kit): Shared libraries that everyone uses.

Feature Modules (Search, Home, Library): Independent silos.

Rule: Feature modules cannot depend on each other. Search cannot import Library. They can only depend on Core.

Answer (Part 2 of 3): Navigation (The Router)

Problem: Search needs to open the Player, but doesn't know the Player class exists.

Solution: Deep Link / Router Pattern.

Implementation: Search asks the Router: navigator.open("spotify:player:track:123").

Registry: The App Module registers the mapping: "spotify:player" -> PlayerActivity. This decouples the implementation from the invocation.
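A minimal router sketch, assuming a prefix-matched URI scheme (the `Router` class, handler signatures, and screen names are illustrative inventions, not a real navigation library):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Feature modules resolve string URIs through a registry owned by the App
// Module, so Search never imports the Player class directly.
public class Router {
    private final Map<String, Function<String, String>> routes = new LinkedHashMap<>();

    public void register(String prefix, Function<String, String> handler) {
        routes.put(prefix, handler);
    }

    public String open(String uri) {
        for (Map.Entry<String, Function<String, String>> route : routes.entrySet()) {
            if (uri.startsWith(route.getKey())) return route.getValue().apply(uri);
        }
        throw new IllegalArgumentException("no route for " + uri);
    }
}
```

The App Module does the registration at startup; feature modules only ever see the `Router` interface.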

Answer (Part 3 of 3): Feature Flagging

Mechanism: Every module entry point is wrapped in a FeatureGuard.

Logic: On app launch, we fetch a JSON config: {"enable_lyrics_module": false}.

Safety: If the Lyrics module causes a crash loop, Ops updates the config on the server.

The app receives the update, and the FeatureGuard hides the entry point (e.g., removes the Lyrics button) instantly, preventing the crash without an App Store update.
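A FeatureGuard sketch under stated assumptions: the class name is invented, and defaulting unknown flags to "enabled" is a design choice assumed here (a failed config fetch should not hide healthy features; only explicit kill switches do).

```java
import java.util.Map;

// Wraps every module entry point; driven by the remote config map.
public class FeatureGuard {
    private volatile Map<String, Boolean> config;

    public FeatureGuard(Map<String, Boolean> initial) { this.config = initial; }

    // Called when a new remote config arrives; takes effect immediately.
    public void update(Map<String, Boolean> next) { this.config = next; }

    public boolean isEnabled(String feature) {
        return config.getOrDefault(feature, true); // unknown flags stay on
    }
}
```

The UI checks `isEnabled("enable_lyrics_module")` before rendering the Lyrics button, so flipping the server flag removes the entry point on the next config refresh.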

5. Image Loading at Scale - The "Album Art" Problem

Difficulty Level: Medium-High

Role: Mobile Engineer

Source: Mobile Infrastructure

Topic: Networking & Caching

Interview Round: Technical Coding (45 minutes)

Business Function: Core UI

Question:

"The 'Browse' page loads 50 album covers simultaneously.

1. If you naively load all 50, you'll kill the network and memory. How do you optimize this?

2. Design a multi-tier cache (Memory + Disk) for images.

3. How do you handle the 'Wrong Image' bug (scrolling fast results in the wrong album art appearing on a cell)?"

Answer Framework

STAR Method Structure:

Situation: Grid views with heavy imagery caused Out-Of-Memory (OOM) crashes and flickering.

Task: Build an efficient Image Loader library.

Action: Implemented an LRU Memory Cache, Request Cancellation on scroll, and Bitmap Downsampling.

Result: Zero OOMs; smooth scrolling with instant visual feedback from cache.

Key Competencies Evaluated:

Memory Management: Bitmaps are heavy. Understanding ARGB_8888 vs RGB_565.

Race Conditions: Handling async callbacks in recycled views.

Caching Strategy: LRU (Least Recently Used) algorithms.

Answer (Part 1 of 3): Request Optimization

Downsampling: Never load a 2000x2000px image into a 100x100px thumbnail view. Decode only the bounds needed (inJustDecodeBounds).

Prioritization: Pause image requests while the user is "Flinging" (scrolling super fast).

Only resume requests when the scroll state settles to "Idle" or "Slow Drag."

Cancellation: When a View is recycled (scrolled off screen), immediately cancel the pending network request for that view to save bandwidth.

Answer (Part 2 of 3): The "Wrong Image" Bug (Race Condition)

Scenario:

○ Cell A requests Image 1.

○ User scrolls. Cell A is recycled to become Cell B.

○ Cell B requests Image 2.

○ Network returns Image 1 (slow response).

○ Cell B displays Image 1 (Wrong!).

Fix: Tagging.

○ When making a request, set view.setTag(url).

○ In the callback, check: if (view.getTag() == url) { setImage() } else { discard }.
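The tagging fix can be shown with a toy cell class (names invented for illustration; a real implementation would tag the platform View itself):

```java
// A cell remembers the URL it last requested; a late callback for any other
// URL is treated as stale and discarded.
public class ImageCell {
    private String tag;          // the URL this cell currently wants
    private String displayedUrl; // what is actually shown

    public void request(String url) { tag = url; }

    public void onImageLoaded(String url) {
        if (url.equals(tag)) displayedUrl = url; // else: stale response, discard
    }

    public String displayedUrl() { return displayedUrl; }
}
```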

Answer (Part 3 of 3): Caching Layers

L1 (Memory): LRU Cache. Fast access. Stores the Bitmap. Size = 1/8th of available app memory.

L2 (Disk): Stores the compressed file (JPG/PNG). Persistent across app launches.

L3 (Network): The source of truth.

Lookup: Check L1 -> Check L2 -> Fetch L3 -> Write to L2 -> Write to L1 -> Display.
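The lookup order above can be sketched as a toy three-tier cache. The memory/disk tiers are plain maps for illustration (a real L1 would be a size-bounded LRU cache, and L2 would hit the filesystem); the class name and `network` stand-in are assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Check L1 -> check L2 -> fetch L3, writing back through each tier on a miss.
public class TieredImageCache {
    private final Map<String, byte[]> memory = new HashMap<>(); // L1
    private final Map<String, byte[]> disk = new HashMap<>();   // L2
    private final Function<String, byte[]> network;             // L3
    private int networkFetches = 0;

    public TieredImageCache(Function<String, byte[]> network) { this.network = network; }

    public byte[] get(String url) {
        byte[] bytes = memory.get(url);
        if (bytes != null) return bytes;
        bytes = disk.get(url);
        if (bytes == null) {            // miss everywhere: go to network
            bytes = network.apply(url);
            networkFetches++;
            disk.put(url, bytes);       // write-through to L2
        }
        memory.put(url, bytes);         // promote to L1 either way
        return bytes;
    }

    public int networkFetches() { return networkFetches; }
}
```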

6. Adaptive Streaming - Managing the Buffer

Difficulty Level: Very High

Role: Staff Mobile Engineer (Core Audio)

Source: Spotify Audio Experience Team

Topic: Networking & ExoPlayer/AVPlayer

Interview Round: Domain Expertise (60 minutes)

Business Function: Playback Infrastructure

Question:

"A user is listening to a song on high-speed Wi-Fi, then walks out the door and switches to a weak 3G connection.

1. How do you design the buffer strategy to prevent the music from stopping (stuttering) during this switch?

2. Explain the logic for Adaptive Bitrate Streaming. When do you switch from 320kbps to 96kbps?

3. If the buffer is empty and the network is slow, do you prioritize downloading the next 10 seconds of the current song, or the first 5 seconds of the next song (gapless playback)?"

Answer Framework

STAR Method Structure:

Situation: Network volatility is the #1 cause of user churn in streaming apps.

Task: Ensure continuous playback even when bandwidth drops by 90%.

Action: Implemented a Dual-Threshold Buffer strategy and a pessimistic bandwidth estimator.

Result: Reduced "Re-buffering Events" by 40% while maintaining "High Quality" audio for 80% of the session.

Key Competencies Evaluated:

Streaming Protocols: HLS vs. DASH vs. Progressive Download.

Bandwidth Estimation: Moving averages vs. instantaneous throughput.

Trade-offs: Audio Quality vs. Playback Continuity.

Answer (Part 1 of 3): Dual-Threshold Buffering

Strategy: We don't just "fill the buffer." We use thresholds.

Start Threshold: We need 2s of audio to start playing.

Rebuffer Threshold: If the buffer drops below 5s, we aggressively request chunks.

Cap: We stop downloading at 30s to save data (in case the user skips).

Network Switch: When Wi-Fi drops, the OS takes ~2s to handshake 3G. Our 30s buffer covers this "dead air" gap easily.
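The three thresholds above can be encoded directly; the numbers mirror the example and the class name is invented for illustration:

```java
// Dual-threshold buffer policy: small threshold to start, aggressive refill
// below the rebuffer line, and a cap so we stop downloading past 30s.
public class BufferPolicy {
    static final int START_THRESHOLD_SEC = 2;    // enough to begin playback
    static final int REBUFFER_THRESHOLD_SEC = 5; // below this, fetch aggressively
    static final int CAP_SEC = 30;               // saves data if the user skips

    public static boolean canStartPlayback(int bufferedSec) {
        return bufferedSec >= START_THRESHOLD_SEC;
    }

    public static boolean isUrgent(int bufferedSec) {
        return bufferedSec < REBUFFER_THRESHOLD_SEC;
    }

    public static boolean shouldFetchMore(int bufferedSec) {
        return bufferedSec < CAP_SEC;
    }
}
```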

Answer (Part 2 of 3): Adaptive Bitrate Logic

Algorithm: We track throughput_kbps using an exponential moving average.

Rule: If (throughput < current_bitrate * 1.5), downgrade quality.

Hysteresis: We don't switch immediately on a single bad packet. We wait for a trend (e.g., 3 consecutive bad chunks) to prevent "Quality Thrashing" (switching High->Low->High constantly).

Seamless Switch: We switch quality at the next chunk boundary, not mid-chunk.
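The downgrade-with-hysteresis rule can be sketched as a small controller. The bitrate ladder values and class name are assumptions for illustration; the "bad chunk" test is the `throughput < current_bitrate * 1.5` rule above, written in integer arithmetic.

```java
// Downgrade one rung only after 3 consecutive bad chunks, preventing
// High->Low->High "Quality Thrashing" on a single bad measurement.
public class BitrateController {
    private static final int[] LADDER_KBPS = {96, 160, 320};
    private static final int BAD_CHUNKS_TO_DOWNGRADE = 3;

    private int index = 2; // start optimistic, at 320 kbps
    private int badStreak = 0;

    public void onChunkThroughput(int throughputKbps) {
        // "bad" = throughput < current_bitrate * 1.5 (integer form: t*2 < b*3)
        if (throughputKbps * 2 < LADDER_KBPS[index] * 3) badStreak++;
        else badStreak = 0;
        if (badStreak >= BAD_CHUNKS_TO_DOWNGRADE && index > 0) {
            index--;        // one rung down, applied at the next chunk boundary
            badStreak = 0;
        }
    }

    public int currentKbps() { return LADDER_KBPS[index]; }
}
```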

Answer (Part 3 of 3): Pre-Caching vs. Current Buffer

Priority: Current Song Continuity is King.

Logic: If the current buffer is < 10s, dedicate 100% bandwidth to the current song.

Gapless: Only when the current buffer > 15s do we start a background thread to fetch the header (first 5s) of the next song. This ensures the transition is instant, but never at the risk of stopping the current track.

7. "Vampire Apps" - Battery Optimization

Difficulty Level: High

Role: Senior Mobile Engineer

Source: Performance Team

Topic: Energy Profiling & Wake Locks

Interview Round: Technical Deep Dive (45 minutes)

Business Function: Core Experience

Question:

"Users are complaining that Spotify drains 20% of their battery in the background, even when paused.

1. What are the likely technical causes of 'Battery Drain' when an app is supposedly idle?

2. How do you manage Wake Locks to ensure the CPU sleeps when music stops?

3. How would you schedule a 'Daily Mix Update' (background sync) without killing the battery?"

Answer Framework

STAR Method Structure:

Situation: High battery usage leads to uninstalls.

Task: Identify and fix "Stuck Wake Locks" and inefficient background jobs.

Action: Audited the codebase for unreleased locks; migrated all non-critical syncs to WorkManager (Android) / BGTaskScheduler (iOS).

Result: Reduced background battery consumption by 60%.

Key Competencies Evaluated:

Hardware Control: CPU Wake Locks vs. Screen Locks.

Job Scheduling: Batching network calls.

Profiling Tools: Battery Historian (Android) / Energy Organizer (Xcode).

Answer (Part 1 of 3): The "Stuck Lock" Diagnosis

The Culprit: A Partial Wake Lock (Android) keeps the CPU humming even if the screen is off.

Scenario: The app acquired a lock to download a song, but the network timed out, and the finally { lock.release() } block was skipped or the logic was flawed.

Fix: Use try-with-resources or strict timeouts (e.g., acquire(10mins)). Never acquire a lock without a failsafe timeout.
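The try-with-resources fix can be sketched as follows. `TimedLock` is a hypothetical stand-in for a platform wake lock, named for this example; the point is that `AutoCloseable` plus try-with-resources makes the release unskippable.

```java
// The lock implements AutoCloseable, so try-with-resources guarantees
// release even when the download throws or times out.
public class TimedLock implements AutoCloseable {
    private boolean held = false;

    public void acquire() { held = true; /* real code would also arm a failsafe timeout */ }

    @Override public void close() { held = false; }

    public boolean isHeld() { return held; }

    public static void downloadWithLock(TimedLock lock, Runnable download) {
        try (TimedLock l = lock) {
            l.acquire();
            download.run(); // may throw; the lock is released regardless
        }
    }
}
```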

Answer (Part 2 of 3): Playback State Management

Logic: The Audio Engine has a state machine.

Transition:PLAYING -> PAUSED.

Action: On PAUSED, we must immediately release the WifiLock and WakeLock.

Edge Case: If the user pulls out headphones, the OS pauses the audio. We must listen for the AUDIO_BECOMING_NOISY broadcast to trigger the lock release immediately.

Answer (Part 3 of 3): Efficient Scheduling (WorkManager)

Bad Pattern: Using an AlarmManager to wake up exactly at 3:00 AM. This wakes the radio and CPU for our app alone.

Good Pattern: Use WorkManager / BGTaskScheduler.

Constraints: Set requiresCharging = true and requiresUnmeteredNetwork = true.

OS Optimization: The OS will "batch" our job with other apps (e.g., Gmail sync). It wakes the radio once, flushes all data for all apps, and goes back to sleep. This is 10x more efficient.

8. Accessibility (A11y) - Designing for Everyone

Difficulty Level: Medium-High

Role: Product Engineer (UI Focus)

Source: Design Systems Team

Topic: Accessibility & Inclusive Design

Interview Round: Product/UI (45 minutes)

Business Function: UI Platform

Question:

"We need to ensure Spotify is fully usable by blind users via Screen Readers (TalkBack/VoiceOver).

1. The 'Play' button is an icon (triangle) with no text. How does the Screen Reader know what to say?

2. We have a complex 'Equalizer' graph that users can drag. How do you make this custom view accessible?

3. How do you support 'Dynamic Type' (Large Fonts) without breaking the layout of the 'Now Playing' screen?"

Answer Framework

STAR Method Structure:

Situation: Visual-heavy apps like Spotify are often unusable for the visually impaired.

Task: Achieve WCAG 2.1 compliance.

Action: Added semantic content descriptions; implemented "Accessibility Actions" for custom views.

Result: Increased Monthly Active Users (MAU) in the accessibility segment by 15%; improved app rating.

Key Competencies Evaluated:

Platform APIs: contentDescription (Android), accessibilityLabel (iOS).

Custom Views: Exposing internal state to accessibility services.

Layout Fluidity: ConstraintLayout vs. fixed frames.

Answer (Part 1 of 3): Content Descriptions

The Basics: Visual icons must have a hidden label.

Stateful Labels: A static "Play" label is wrong. It must change based on state.

○ If Playing: Label = "Pause".

○ If Paused: Label = "Play".

Grouping: Don't let the reader focus on "Song Title" then "Artist" separately. Group them into a single container so it reads: "Song X by Artist Y."
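Both rules reduce to tiny pure functions; a sketch (class and method names invented, and in real code these strings would come from localized resources):

```java
// Stateful label: announce the action a tap will perform, not the icon.
// Grouping: one announcement for the whole row instead of two focus stops.
public class A11yLabels {
    public static String playButtonLabel(boolean isPlaying) {
        return isPlaying ? "Pause" : "Play";
    }

    public static String nowPlayingAnnouncement(String title, String artist) {
        return title + " by " + artist;
    }
}
```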

Answer (Part 2 of 3): Custom Views (The Equalizer)

Problem: A canvas drawing of a graph is invisible to VoiceOver.

Solution: Implement the Accessibility Node Provider.

Virtual View Hierarchy: We tell the OS, "This view contains 5 virtual sliders (Bass, Mid, Treble)."

Actions: We map physical gestures (swipe up/down) to logical actions (ACTION_SCROLL_FORWARD -> Increase Bass). The Screen Reader announces "Bass, 50%... Bass, 60%."

Answer (Part 3 of 3): Dynamic Type & Layouts

Constraint: The "Now Playing" screen has limited space.

Strategy:

1. Reflow: If the font size > 200%, switch the layout from Horizontal (Image left, Text right) to Vertical (Image top, Text bottom).

2. Ellipsize: If text is still too long, use Marquee (scrolling text) rather than cutting it off.

3. Vector Graphics: Ensure icons scale with the text so the Play button doesn't look tiny next to giant text.

9. Automated Testing - Fighting Flakiness

Difficulty Level: High

Role: Senior Mobile Engineer (Productivity)

Source: Developer Experience Team

Topic: CI/CD & Test Automation

Interview Round: Engineering Culture (45 minutes)

Business Function: Quality Engineering

Question:

"We run 5,000 UI tests on every Pull Request. The suite takes 4 hours and is 10% flaky (fails randomly).

1. How do you architect the test suite to run in < 20 minutes?

2. How do you identify and handle 'Flaky Tests' automatically?

3. Explain the difference between Snapshot Testing and Functional UI Testing. When would you use each?"

Answer Framework

STAR Method Structure:

Situation: Slow, flaky CI pipelines caused developer frustration and merged bugs.

Task: Reduce build time to <20m and flakiness to <1%.

Action: Parallelized execution (Sharding); implemented "Flaky Retry" logic; moved pixel-perfect checks to Snapshot Tests.

Result: PR feedback loop dropped to 15 mins; developers trust the green checkmark again.

Key Competencies Evaluated:

Test Pyramid: Unit vs. Integration vs. E2E.

Infrastructure: Sharding & Parallelization.

Tooling: Espresso/XCTest vs. Paparazzi/SnapshotTesting.

Answer (Part 1 of 3): Sharding & Parallelization

Problem: Running 5,000 tests sequentially on 1 emulator takes forever.

Solution: Sharding.

Implementation: We spin up 50 emulators in the cloud (Firebase Test Lab / AWS Device Farm).

Logic: We split the test suite into 50 chunks. Each emulator runs 100 tests in parallel. Total time = Time of longest chunk (e.g., 10 mins).
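The "total time = time of the longest chunk" claim can be checked with a back-of-envelope sketch (class name invented; real shard assignment would use historical per-test timings):

```java
import java.util.Arrays;

// Wall-clock time of a sharded run is the duration of the slowest shard,
// not the sum. Greedy longest-first assignment keeps shards roughly balanced.
public class Sharder {
    public static int shardedRuntime(int[] testDurations, int shards) {
        int[] load = new int[shards];
        int[] sorted = testDurations.clone();
        Arrays.sort(sorted); // ascending; walk it backwards for longest-first
        for (int i = sorted.length - 1; i >= 0; i--) {
            int lightest = 0;
            for (int s = 1; s < shards; s++) if (load[s] < load[lightest]) lightest = s;
            load[lightest] += sorted[i];
        }
        int max = 0;
        for (int l : load) max = Math.max(max, l);
        return max;
    }
}
```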

Answer (Part 2 of 3): Handling Flakiness

Detection: If a test fails, immediately retry it 2 more times in the same run.

○ Fail -> Fail -> Fail = True Failure (Block PR).

○ Fail -> Pass -> Pass = Flaky (Merge PR, but flag the test).

Quarantine: A script analyzes the history. If Test_A is flaky > 5% of the time, it is automatically moved to a "Quarantine" folder. It runs but doesn't block merges until a human fixes it.
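The retry classifier can be sketched in a few lines (names invented; the first entry is the original run, the rest are retries triggered only after a failure):

```java
import java.util.List;

// Failed every attempt -> True Failure (block the PR).
// Failed, then passed on a retry -> Flaky (merge, but flag for quarantine review).
public class FlakeClassifier {
    public enum Verdict { PASS, FLAKY, TRUE_FAILURE }

    public static Verdict classify(List<Boolean> attempts) {
        if (attempts.get(0)) return Verdict.PASS;      // passed outright, no retries
        for (int i = 1; i < attempts.size(); i++) {
            if (attempts.get(i)) return Verdict.FLAKY; // failed, then passed on retry
        }
        return Verdict.TRUE_FAILURE;                   // failed every attempt
    }
}
```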

Answer (Part 3 of 3): Snapshot vs. Functional

Snapshot Testing (Paparazzi/iOSSnapshotTestCase): Renders a view to a PNG and compares it pixel-by-pixel to a "Golden Master."

Use for: Checking padding, fonts, dark mode colors. Fast & deterministic.

Functional UI Testing (Espresso/XCUITest): Simulates clicks and navigation.

Use for: "Click Login -> Verify Home Screen appears." Slow & brittle.

Strategy: Move 80% of "Visual Checks" to Snapshots to speed up the suite.

10. Binary Size - The "100MB Limit"

Difficulty Level: Medium-High

Role: Mobile Platform Engineer

Source: Release Engineering

Topic: App Size Optimization

Interview Round: System Design (45 minutes)

Business Function: Growth (Emerging Markets)

Question:

"Our app size has ballooned to 150MB. In markets like India/Brazil, users uninstall apps >100MB to save space.

1. How do you analyze the APK/IPA to find the fat?

2. We have 50MB of assets (images/fonts). How do we reduce this without looking ugly?

3. Explain App Bundles (Android) or App Thinning (iOS). How does it help?"

Answer Framework

STAR Method Structure:

Situation: App size crossed the critical threshold, hurting conversion rates in emerging markets.

Task: Reduce download size by 40% without removing features.

Action: Enabled R8/ProGuard shrinking; converted PNGs to WebP/AVIF; adopted Dynamic Delivery.

Result: Download size dropped to 85MB; installs increased by 10%.

Key Competencies Evaluated:

Build Tools: ProGuard/R8, Linker flags.

Image Formats: WebP/AVIF vs PNG.

Dynamic Delivery: Downloading features on demand.

Answer (Part 1 of 3): Analyzing the Artifact

Tools: APK Analyzer (Android Studio) / GrandPerspective (macOS) on the unzipped IPA.

What to look for:

1. Duplicate Classes: Libraries that include the same dependencies (e.g., two versions of OkHttp).

2. Resources: Giant background images or unused font files.

3. Native Libs (.so): Are we shipping x86 libs to production phones (which are mostly ARM)?

Answer (Part 2 of 3): Asset Optimization

Vector Drawables: Replace bitmaps with Vectors (SVG/XML) wherever possible. They are tiny and scale infinitely.

WebP: Convert all remaining PNGs to WebP. It offers 30% compression gain with identical quality.

On-Demand Resources: Move non-critical assets (e.g., the "Onboarding Tour" animations) to the server. Download them only when the user actually starts the tour.

Answer (Part 3 of 3): App Bundles / Thinning

Concept: We upload a "Master File" to the Store.

Mechanism:

Android App Bundle (.aab): Google Play generates a custom APK for that specific user.

Slicing: A user with an ARM64 processor and an XXHDPI screen gets only the ARM64 code and only the XXHDPI images. They don't download the Tablet layouts or French language strings if they are in the US.

Impact: This typically reduces download size by ~50% instantly compared to a "Universal APK."