Salesforce Developer Interview Questions
Senior Engineering Manager — Salesforce Developer Hiring Pack
Question 1 — Bulk Trigger Architecture & Governor Limits
Domain: Apex Programming · Salesforce Triggers · Governor Limits
Difficulty: Hard
Expected Interview Time: 15–20 minutes
Scenario
Your company processes large insurance claim batches overnight. A business requirement states: "When a Claim record is updated to Status = Approved, automatically create a related Payment record and send an email notification to the policyholder."
A junior developer has implemented this using a trigger that fires per-record and contains a SOQL query and DML operation inside the loop. During UAT, a batch of 300 claims was approved simultaneously and the org threw System.LimitException: Too many SOQL queries: 101.
You've been asked to redesign this trigger end-to-end.
Model Answer
Step-by-step reasoning:
- Identify the anti-patterns in the existing code: SOQL inside loops, DML inside loops, no handler class separation.
- Apply the Trigger Handler pattern — one trigger per object, logic delegated to an Apex handler class.
- Bulkify the logic — collect all relevant record IDs into a Set, run a single SOQL query outside the loop, then process the map of results.
- Batch DML — accumulate new Payment__c records in a List, then perform a single insert after the loop.
- Use Messaging.sendEmail with a list of Messaging.SingleEmailMessage objects to batch email sends.
Salesforce-specific implementation:
// ClaimTrigger.trigger
trigger ClaimTrigger on Claim__c (after update) {
ClaimTriggerHandler.handleAfterUpdate(Trigger.new, Trigger.oldMap);
}
// ClaimTriggerHandler.cls
public class ClaimTriggerHandler {
public static void handleAfterUpdate(
List<Claim__c> newClaims,
Map<Id, Claim__c> oldMap
) {
List<Claim__c> approvedClaims = new List<Claim__c>();
for (Claim__c c : newClaims) {
if (c.Status__c == 'Approved' &&
oldMap.get(c.Id).Status__c != 'Approved') {
approvedClaims.add(c);
}
}
if (approvedClaims.isEmpty()) return;
// Single SOQL — fetch related Policyholder contact info
Set<Id> claimIds = new Map<Id, Claim__c>(approvedClaims).keySet();
Map<Id, Claim__c> claimMap = new Map<Id, Claim__c>(
[SELECT Id, Policy__r.Policyholder__r.Email,
Claim_Amount__c
FROM Claim__c
WHERE Id IN :claimIds]
);
List<Payment__c> payments = new List<Payment__c>();
List<Messaging.SingleEmailMessage> emails =
new List<Messaging.SingleEmailMessage>();
for (Claim__c c : approvedClaims) {
Claim__c enriched = claimMap.get(c.Id);
// Batch DML list
payments.add(new Payment__c(
Claim__c = c.Id,
Amount__c = enriched.Claim_Amount__c,
Status__c = 'Pending'
));
// Batch email list
Messaging.SingleEmailMessage mail =
new Messaging.SingleEmailMessage();
mail.setToAddresses(
new String[]{enriched.Policy__r.Policyholder__r.Email});
mail.setSubject('Your Claim Has Been Approved');
mail.setPlainTextBody('Dear Policyholder, your claim is approved.');
emails.add(mail);
}
insert payments; // 1 DML regardless of batch size
Messaging.sendEmail(emails); // 1 email call
}
}
Performance considerations:
- Avoid recursion — use a static guard (e.g., private static Boolean hasRun = false;) if the trigger can fire multiple times in the same transaction (see the sketch below).
- Use @future or Platform Events if downstream logic risks exceeding CPU time (10 s per synchronous transaction).
- For very large volumes (10,000+ records), delegate to Queueable or Batchable Apex.
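A minimal sketch of the recursion guard (class and member names are illustrative, not part of the scenario). An Id-based guard is safer than a single Boolean, which silently skips later 200-record chunks of the same transaction:
// ClaimTriggerGuard.cls — hypothetical helper
public class ClaimTriggerGuard {
    // Ids already processed in this transaction
    private static Set<Id> processedIds = new Set<Id>();

    // Returns only records not yet seen, marking them as processed
    public static List<Claim__c> filterUnprocessed(List<Claim__c> claims) {
        List<Claim__c> fresh = new List<Claim__c>();
        for (Claim__c c : claims) {
            if (!processedIds.contains(c.Id)) {
                processedIds.add(c.Id);
                fresh.add(c);
            }
        }
        return fresh;
    }
}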
Governor limits to highlight:
| Limit | Per Transaction Cap |
|---|---|
| SOQL queries | 100 |
| DML statements | 150 |
| DML rows | 10,000 |
| Email invocations (Messaging.sendEmail calls) | 10 |
| Single emails to external addresses | 5,000/day (org-wide daily cap) |
What a Strong Candidate Must Mention
- Bulkification — no SOQL or DML inside for loops, ever.
- Trigger Handler pattern — separation of concerns between trigger and logic class.
- Trigger.new vs Trigger.oldMap — correctly filtering only records transitioning to Approved.
- Static flag or TriggerContext — preventing infinite recursion in recursive trigger scenarios.
- Async escalation path — knowing when to move to Queueable or Batch Apex.
Smart Follow-Up Questions
- "The claim batch now contains 12,000 records. The single
insert paymentscall will throw a DML rows limit error. How would you redesign this?"
- "How would you unit test this trigger handler to ensure 100% code coverage without relying on real email delivery?"
- "If a downstream system needs to consume these new Payment records in real time, would you use a trigger or a Platform Event? Why?"
Question 2 — LWC Performance & Data Binding at Scale
Domain: Lightning Web Components (LWC) · Salesforce Performance Optimization
Difficulty: Hard
Expected Interview Time: 15–20 minutes
Scenario
Your team has built a Lightning Web Component that renders a data table showing 500+ Opportunity records on an Account detail page. Users are reporting that the page freezes for 3–5 seconds on load, the table is unresponsive during scroll, and search filtering re-renders the entire list every keystroke.
You are asked to audit the component and propose a complete performance overhaul.
Model Answer
Step-by-step reasoning:
- Diagnose the rendering bottleneck — 500+ DOM nodes rendered in a single template for:each is the primary cause of paint lag.
- Replace the for:each table with lightning-datatable — it supports row virtualization (in recent API versions) and infinite loading, so far fewer nodes hit the DOM.
- Debounce the search input — prevent wire or Apex calls on every keystroke using setTimeout.
- Server-side filtering — move search filtering to Apex rather than filtering a full client-side array.
- Lazy loading / pagination — limit the initial data fetch to 50 records; load more on scroll or page action.
- Cache Apex responses — use @wire with cacheable=true where data is not mutation-sensitive.
LWC Implementation (debounced search + server-side filter):
// opportunityList.js
import { LightningElement, api, wire, track } from 'lwc';
import getOpportunities from '@salesforce/apex/OpportunityController.getOpportunities';
const DEBOUNCE_DELAY = 300;
export default class OpportunityList extends LightningElement {
@api recordId; // Account Id from record page
@track opportunities = [];
@track isLoading = false;
searchTerm = '';
debounceTimer;
columns = [
{ label: 'Name', fieldName: 'Name' },
{ label: 'Stage', fieldName: 'StageName' },
{ label: 'Close Date', fieldName: 'CloseDate', type: 'date' },
{ label: 'Amount', fieldName: 'Amount', type: 'currency' }
];
connectedCallback() {
this.loadOpportunities();
}
handleSearchChange(event) {
const value = event.target.value;
clearTimeout(this.debounceTimer);
this.debounceTimer = setTimeout(() => {
this.searchTerm = value;
this.loadOpportunities();
}, DEBOUNCE_DELAY);
}
async loadOpportunities() {
this.isLoading = true;
try {
this.opportunities = await getOpportunities({
accountId: this.recordId,
searchTerm: this.searchTerm,
pageSize: 50
});
} catch (error) {
console.error('Failed to load opportunities', error);
} finally {
this.isLoading = false;
}
}
}
<!-- opportunityList.html -->
<template>
<lightning-card title="Opportunities">
<div slot="actions">
<lightning-input
type="search"
label="Search"
variant="label-hidden"
onchange={handleSearchChange}>
</lightning-input>
</div>
<template if:true={isLoading}>
<lightning-spinner alternative-text="Loading"></lightning-spinner>
</template>
<lightning-datatable
key-field="Id"
data={opportunities}
columns={columns}
hide-checkbox-column>
</lightning-datatable>
</lightning-card>
</template>
// OpportunityController.cls
public with sharing class OpportunityController {
@AuraEnabled(cacheable=true)
public static List<Opportunity> getOpportunities(
Id accountId,
String searchTerm,
Integer pageSize
) {
// Null-safe; escaping is defensive since the value is bound, not concatenated
String searchFilter = '%' + String.escapeSingleQuotes(searchTerm == null ? '' : searchTerm) + '%';
return [
SELECT Id, Name, StageName, CloseDate, Amount
FROM Opportunity
WHERE AccountId = :accountId
AND (Name LIKE :searchFilter OR StageName LIKE :searchFilter)
ORDER BY CloseDate DESC
LIMIT :pageSize
];
}
}
Performance considerations:
- cacheable=true enables client-side LDS caching; use refreshApex to bust the cache after mutations (see the sketch below).
- lightning-datatable rendering is significantly faster than a manual template for:each for 50+ rows.
- Moving filter logic to SOQL reduces JS heap pressure and avoids filtering 500-item arrays in the browser.
- For pagination beyond 50 records, implement OFFSET or use the lightning-datatable onloadmore event.
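A minimal sketch of the cacheable-wire plus refreshApex pattern mentioned above. This is a wired variant of the component (the imperative loadOpportunities path shown earlier bypasses this cache); class and handler names are illustrative:
// opportunityListWired.js — illustrative sketch
import { LightningElement, api, wire } from 'lwc';
import { refreshApex } from '@salesforce/apex';
import getOpportunities from '@salesforce/apex/OpportunityController.getOpportunities';

export default class OpportunityListWired extends LightningElement {
    @api recordId;
    wiredResult; // hold the full wired result, not just .data, for refreshApex

    @wire(getOpportunities, { accountId: '$recordId', searchTerm: '', pageSize: 50 })
    wired(result) {
        this.wiredResult = result;
    }

    async handleRecordSaved() {
        // After a mutation elsewhere, re-provision the cached wire data
        await refreshApex(this.wiredResult);
    }
}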
What a Strong Candidate Must Mention
- lightning-datatable vs custom for:each — the built-in component is the correct tool for large datasets.
- Debouncing — essential for search inputs; candidates should know a ~300 ms delay is standard.
- Server-side vs client-side filtering — client-side filtering of large arrays is an antipattern.
- cacheable=true implications — cannot be used on methods that perform DML or have side effects.
- @track decorator awareness — understanding when LWC reactivity does and does not require @track (fields are reactive by default; @track is only needed for deep mutation of objects and arrays).
Smart Follow-Up Questions
- "How would you test this LWC component using Jest? What would you mock?"
- "If the Apex controller hits the SOQL 50,000 row limit due to heavy filtering on a large org, how would you architect the data retrieval differently?"
- "How does LDS (Lightning Data Service) differ from a direct
@AuraEnabledApex call, and when would you prefer one over the other?"
Question 3 — REST API Integration with External System
Domain: REST/SOAP API Integrations · Apex Programming · Security
Difficulty: Medium–Hard
Expected Interview Time: 12–15 minutes
Scenario
Your organisation needs to sync Order data from an external ERP system into Salesforce every hour. The ERP exposes a REST API secured with OAuth 2.0 Client Credentials. The integration must:
- Retrieve orders created in the last hour from the ERP
- Upsert corresponding Order records in Salesforce using an external ID field (ERP_Order_ID__c)
- Handle HTTP errors and retry transient failures
- Not store the ERP client secret in Apex code
Model Answer
Step-by-step reasoning:
- Store credentials securely using Salesforce Named Credentials — never hardcode secrets in Apex.
- Obtain OAuth token via the Named Credential (handles token refresh automatically when configured with Protocol = OAuth 2.0).
- Make the callout in a Schedulable + Queueable pattern — the Schedulable fires every hour and enqueues a Queueable that performs the callout (scheduled Apex cannot make callouts directly; the Queueable must implement Database.AllowsCallouts).
- Parse the response using a structured Apex wrapper class and JSON.deserialize.
- Upsert with an external ID to avoid duplicate creation.
- Error handling — check HttpResponse.getStatusCode(), log failures to a custom Integration_Log__c object.
Salesforce-specific implementation:
// Named Credential: ERP_System (configured in Setup → Named Credentials)
// Protocol: OAuth 2.0 Client Credentials, Auth endpoint, client ID/secret stored there
public class ERPOrderSync implements Schedulable {
public void execute(SchedulableContext sc) {
System.enqueueJob(new ERPOrderSyncQueueable());
}
}
public class ERPOrderSyncQueueable implements Queueable, Database.AllowsCallouts {
public void execute(QueueableContext ctx) {
String since = DateTime.now().addHours(-1)
.formatGmt('yyyy-MM-dd\'T\'HH:mm:ss\'Z\'');
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:ERP_System/api/orders?created_since=' + since);
req.setMethod('GET');
req.setHeader('Accept', 'application/json');
req.setTimeout(20000);
HttpResponse res = new Http().send(req);
if (res.getStatusCode() == 200) {
processOrders(res.getBody());
} else {
logError('ERP Order Sync', res.getStatusCode(), res.getBody());
}
}
private void processOrders(String jsonBody) {
List<ERPOrderWrapper> erpOrders =
(List<ERPOrderWrapper>) JSON.deserialize(
jsonBody, List<ERPOrderWrapper>.class);
List<Order> ordersToUpsert = new List<Order>();
for (ERPOrderWrapper erpOrder : erpOrders) {
ordersToUpsert.add(new Order(
ERP_Order_ID__c = erpOrder.orderId,
Name = erpOrder.orderName,
Status = 'Draft',
EffectiveDate = Date.valueOf(erpOrder.orderDate),
AccountId = resolveAccount(erpOrder.accountRef)
));
}
Database.UpsertResult[] results =
Database.upsert(ordersToUpsert, Order.ERP_Order_ID__c, false);
for (Database.UpsertResult r : results) {
if (!r.isSuccess()) {
logError('Upsert failure', 0,
r.getErrors()[0].getMessage());
}
}
}
public class ERPOrderWrapper {
public String orderId;
public String orderName;
public String orderDate;
public String accountRef;
}
private Id resolveAccount(String externalRef) {
List<Account> accs = [
SELECT Id FROM Account
WHERE ERP_Account_ID__c = :externalRef LIMIT 1
];
return accs.isEmpty() ? null : accs[0].Id;
}
private void logError(String context, Integer code, String msg) {
insert new Integration_Log__c(
Context__c = context,
Status_Code__c = code,
Error_Message__c = msg
);
}
}
Performance and security considerations:
- Named Credentials handle token refresh transparently — never store secrets in Custom Settings or hardcode them.
- Database.upsert(..., false) uses partial-save mode — failures on individual records don't roll back the entire batch.
- resolveAccount inside a loop is a SOQL-in-loop antipattern in production — refactor to one query and a Map lookup before the loop.
- Named Credentials whitelist the ERP endpoint automatically; plain endpoints would require Remote Site Settings instead.
What a Strong Candidate Must Mention
- Named Credentials — the only acceptable way to manage external secrets in Salesforce Apex.
- Queueable + Database.AllowsCallouts — callouts must run in an async context; Schedulable cannot make callouts directly.
- Database.upsert with an external ID — idempotent, prevents duplicate records.
- Error logging pattern — a custom Integration_Log__c object or Platform Events for observability.
- JSON.deserialize with a wrapper class vs JSON.deserializeUntyped — type-safe parsing is preferred.
Smart Follow-Up Questions
- "The ERP API occasionally returns a
503 Service Unavailable. How would you implement retry logic with exponential backoff in Apex?"
- "How would you handle the scenario where the ERP sends 2,000 orders in a single response, potentially exceeding Apex heap limits?"
- "How would you write an Apex test for this class without making a real HTTP callout?"
Question 4 — Security, Sharing Model & Data Access Design
Domain: Security & Access Control · Data Modelling
Difficulty: Medium
Expected Interview Time: 10–12 minutes
Scenario
You are onboarding a new Financial Services client onto Salesforce. They have the following access requirements:
- Sales Reps can only see Accounts and Opportunities they own.
- Sales Managers can see all records owned by their direct reports.
- Finance Team must see all Opportunity records org-wide, but must not be able to edit them.
- A specific VP needs temporary access to a sensitive Opportunity record not owned by her or her team.
- Sensitive financial fields (Annual_Revenue__c, Discount_Percentage__c) must be hidden from Sales Reps.
Design the complete security architecture for this requirement.
Model Answer
Step-by-step reasoning:
- Set OWD (Organisation-Wide Defaults) for Account and Opportunity to Private — this is the most restrictive baseline; access can only be opened upward from the OWD, never restricted further.
- Use the Role Hierarchy to grant Sales Managers visibility into their reports' records automatically.
- Grant Finance Team read-only access to all Opportunities via a Criteria-Based Sharing Rule: share all Opportunity records with the Finance Team public group, access level = Read Only.
- Grant the VP temporary access using Manual Sharing on the specific record.
- Use Field-Level Security (FLS) on the Profile or a Permission Set to hide Annual_Revenue__c and Discount_Percentage__c from Sales Rep profiles.
Salesforce Security Architecture Mapping:
| Requirement | Salesforce Mechanism |
|---|---|
| Sales Reps see own records only | OWD = Private |
| Managers see reports' records | Role Hierarchy (Grant Access Using Hierarchies = ✓) |
| Finance reads all Opps | Criteria-Based Sharing Rule → Finance Public Group |
| VP temp access to 1 record | Manual Sharing (record-level) |
| Hide financial fields from Reps | FLS via Profile / Permission Set |
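The VP's temporary access can be granted in the UI, or programmatically via the share object — a minimal sketch (the record and user Ids are placeholders):
// Grant one user Read access to one Opportunity (Apex manual sharing)
OpportunityShare vpShare = new OpportunityShare(
    OpportunityId = sensitiveOppId,   // the sensitive record's Id
    UserOrGroupId = vpUserId,         // the VP's User Id
    OpportunityAccessLevel = 'Read',
    RowCause = 'Manual'               // manual shares are removed if the owner changes
);
insert vpShare;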
Apex FLS enforcement (production-grade Apex):
// Always respect FLS in Apex — WITH SECURITY_ENFORCED throws a
// System.QueryException if any selected field is inaccessible
List<Opportunity> opps = [
    SELECT Id, Name, Amount, Annual_Revenue__c
    FROM Opportunity
    WHERE AccountId = :acctId
    WITH SECURITY_ENFORCED
];
// OR use Schema describe for conditional field access
Schema.DescribeFieldResult dfr =
    Opportunity.Annual_Revenue__c.getDescribe();
if (dfr.isAccessible()) {
    // safe to include the field in a dynamic query
}
Common Pitfalls a senior candidate should flag:
- WITH SECURITY_ENFORCED throws a System.QueryException if any field is inaccessible — use cautiously with try/catch, or prefer Security.stripInaccessible() for more granular control (see the sketch below).
- Manual Sharing is lost if the record's OwnerId changes — document this limitation to the client.
- Sharing Rules are not evaluated in system-mode Apex (without sharing) — make sure trigger handlers and service classes declare with sharing.
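A minimal sketch of the Security.stripInaccessible() alternative:
// Strips fields the running user cannot read, instead of throwing
SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.READABLE,
    [SELECT Id, Name, Amount, Annual_Revenue__c
     FROM Opportunity
     WHERE AccountId = :acctId]
);
// Records come back with inaccessible fields removed; inspect
// decision.getRemovedFields() to log exactly what was stripped
List<Opportunity> safeOpps = decision.getRecords();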
What a Strong Candidate Must Mention
- OWD as the most restrictive baseline — you never use OWD to open access, only to close it; Sharing Rules open it back up.
- Profiles vs Permission Sets — best practice is minimum-access profile + granular Permission Sets (Profiles are being deprecated in newer orgs).
- WITH SECURITY_ENFORCED vs Security.stripInaccessible() — knowing the tradeoffs of each FLS enforcement strategy.
- with sharing keyword on Apex classes — Apex runs in system mode by default; sharing rules are only enforced when with sharing is declared.
- Role Hierarchy ≠ Sharing Rules — hierarchy grants implicit access; sharing rules are explicit and more auditable.
Smart Follow-Up Questions
- "If a Sales Rep runs an Apex class that queries Opportunities
without sharing, they can see records they shouldn't. How would you detect and prevent this in a code review process?"
- "The Finance Team now needs to edit Opportunities with Stage =
Closed Wononly. How does this change your sharing design?"
- "What is the difference between
Profiles,Permission Sets, andPermission Set Groups? When would you use each?"
Question 5 — Salesforce DX, CI/CD & Deployment Strategy
Domain: Deployment & CI/CD · Salesforce DX · Git
Difficulty: Medium
Expected Interview Time: 10–12 minutes
Scenario
Your team of 6 developers is working on a large Salesforce implementation with a dev, QA, UAT, and production sandbox pipeline. Developers are constantly overwriting each other's metadata changes. Deployments are manual and error-prone, taking 4–6 hours. You've been asked to design and implement a CI/CD pipeline from scratch using Salesforce DX and Git.
Model Answer
Step-by-step reasoning:
- Migrate to a source-tracked scratch org model using sf org create scratch (formerly sfdx force:org:create) — each developer works in their own scratch org, eliminating conflicts on shared sandboxes.
- Use a Git branching strategy built on short-lived feature branches:
main = production-ready state
release/uat = UAT-gated branch
develop = QA integration branch (referenced by the pipeline below)
feature/JIRA-XXX-description = individual developer branches
- Implement GitHub Actions (or Bitbucket Pipelines / GitLab CI) to automate:
- On PR to develop: validate the deployment against the QA sandbox (check-only, no destructive changes)
- On merge to develop: deploy to QA
- On merge to release/uat: deploy to the UAT sandbox and run the Apex test suite
- On merge to main: run a full check-only validation, pass a manual approval gate, then deploy to production
- Use delta deployments with tools like sfdx-git-delta (SGD) — deploy only changed metadata instead of the full org metadata.
- Enforce a code coverage gate — block merges if Apex code coverage drops below 75%.
Sample GitHub Actions pipeline step (PR validation):
# .github/workflows/validate-pr.yml
name: Validate PR to Develop
on:
pull_request:
branches: [develop]
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0 # Required for delta comparison
- name: Install SF CLI
run: npm install -g @salesforce/cli
- name: Authenticate to QA Sandbox
run: |
echo "${{ secrets.QA_SFDX_AUTH_URL }}" > auth.txt
sf org login sfdx-url --sfdx-url-file auth.txt \
--alias qa-sandbox --set-default
- name: Generate Delta Package (sfdx-git-delta)
run: |
npm install sfdx-git-delta --global
mkdir -p deploy
sgd --to HEAD --from origin/develop \
--repo . --output deploy/
- name: Validate Deployment (Check Only)
run: |
sf project deploy validate \
--manifest deploy/package/package.xml \
--target-org qa-sandbox \
--test-level RunLocalTests \
--wait 30
- name: Check Coverage Threshold
run: |
# Parse test results and fail if coverage < 75%
sf apex run test \
--target-org qa-sandbox \
--code-coverage \
--result-format json \
--output-dir test-results
Key Salesforce DX commands used in a mature pipeline:
# Create scratch org from project definition
sf org create scratch \
--definition-file config/project-scratch-def.json \
--alias my-feature-org --duration-days 7
# Push source to scratch org
sf project deploy start --target-org my-feature-org
# Pull changes back from scratch org after declarative config
sf project retrieve start --target-org my-feature-org
# Run all local Apex tests
sf apex run test --target-org my-feature-org \
--test-level RunLocalTests --code-coverage
# Destructive changes (removing metadata)
sf project deploy start \
--manifest package.xml \
--post-destructive-changes destructiveChangesPost.xml
Performance considerations for the pipeline:
- Delta deployments (sfdx-git-delta) reduce deploy time from hours to minutes by limiting scope to changed components.
- Apex tests run in parallel by default, which shortens runs but raises sandbox CPU load; row-lock contention (UNABLE_TO_LOCK_ROW) may force disabling parallel testing.
- Store SFDX_AUTH_URL values as encrypted GitHub Secrets — never commit .sfdxurl files to the repository.
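The "Check Coverage Threshold" step in the workflow above ends in a placeholder comment. One way it might be completed, assuming jq is available on the runner; the JSON field and file names below are assumptions to verify against your CLI version's actual output:
# Hypothetical continuation of the coverage step
COVERAGE=$(jq -r '.result.summary.testRunCoverage // .summary.testRunCoverage' \
  test-results/test-result.json | tr -d '%')
if [ "${COVERAGE%.*}" -lt 75 ]; then
  echo "Apex coverage ${COVERAGE}% is below the 75% gate"
  exit 1
fi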
What a Strong Candidate Must Mention
- Scratch orgs — isolated, version-controlled developer environments that eliminate shared sandbox conflicts.
- sfdx-git-delta — delta deployments are essential for pipeline speed; deploying full metadata is an antipattern.
- Check-only validation before production push — mandatory for regulated orgs; prevents untested deploys.
- Secrets management — SFDX_AUTH_URL stored as encrypted CI/CD secrets, never in code.
- Destructive changes — awareness of destructiveChanges.xml for metadata removal, often forgotten by juniors.
Smart Follow-Up Questions
- "Two developers both modified the same Flow in their scratch orgs. How do you resolve this metadata conflict at merge time, and what tooling would you use?"
- "Your production deployment failed mid-way through due to a code coverage failure. The org is now in a partially deployed state. What steps do you take?"
- "How would you automate the seeding of test data in a scratch org during pipeline setup, and what are the security considerations?"
Question 6 — Platform Event-Driven Architecture for Decoupled Integrations
Domain: Apex Programming, REST/SOAP API Integrations | Difficulty: Hard | Time: 25 min
Scenario
A global logistics company uses Salesforce as their CRM. When a Shipment__c record reaches Delivered status, four downstream systems must be notified: a billing system (REST), a warehouse management system (REST), a customer notification microservice (REST), and an internal analytics pipeline (Kafka via REST proxy). Previously, all four callouts were chained synchronously inside an Apex trigger. The system has become brittle — if one callout times out, the entire transaction rolls back and the Shipment__c status never saves. You've been asked to re-architect this using Platform Events.
Model Answer
Step-by-Step Reasoning
- Identify the core problem — Callouts in trigger context cannot survive partial failure. A single 30s timeout on one of four endpoints cascades into a full transaction rollback. The delivery status update fails for the user entirely.
- Platform Events as the decoupling layer — The trigger publishes one Shipment_Delivered__e Platform Event. Four independent subscribers consume it asynchronously. The trigger transaction completes immediately — publishing is fire-and-forget from the publisher's perspective.
- Publisher design — The trigger publishes the event synchronously but the event itself is delivered asynchronously. If the trigger transaction rolls back before committing, the event is also rolled back — this is correct behaviour (no phantom events for unsaved records).
- Subscriber design — Each downstream integration subscribes via its own Apex trigger on Shipment_Delivered__e (or an external CometD / Pub/Sub API client for off-platform consumers). Each subscriber handles its own retry logic independently.
- ReplayId and durability — high-volume Platform Events are retained for 72 hours. Subscribers that go offline can resubscribe with a ReplayId to replay missed events — critical for the Kafka analytics pipeline, which may have maintenance windows.
- High-volume considerations — event publishing is capped per 24-hour window (e.g., 250,000 events on Enterprise Edition; verify current limits for your edition). For burst scenarios, monitor event bus throughput and manage CometD (Streaming API) subscription counts carefully.
Architecture Diagram (Text)
Shipment__c Trigger (after update)
│
│ Publishes: Shipment_Delivered__e
│ { ShipmentId, CustomerId, DeliveredAt, TrackingNumber }
▼
[ Platform Event Bus ]
│
┌────┴──────────────────────────────────────┐
▼ ▼ ▼ ▼
Billing Warehouse Notification Analytics
Subscriber Subscriber Subscriber Subscriber
(Apex) (Apex) (Apex) (Apex)
│ │ │ │
REST call REST call REST call Kafka REST
to Billing to WMS to Notif. SVC Proxy
Apex Snippet — Publisher
// ShipmentTriggerHandler.cls
public class ShipmentTriggerHandler {
public static void handleAfterUpdate(
List<Shipment__c> newList,
Map<Id, Shipment__c> oldMap
) {
List<Shipment_Delivered__e> events = new List<Shipment_Delivered__e>();
for (Shipment__c s : newList) {
if (s.Status__c == 'Delivered'
&& oldMap.get(s.Id).Status__c != 'Delivered') {
events.add(new Shipment_Delivered__e(
Shipment_Id__c = s.Id,
Customer_Id__c = s.Account__c,
Tracking_Number__c = s.Tracking_Number__c,
Delivered_At__c = System.now()
));
}
}
if (!events.isEmpty()) {
List<Database.SaveResult> results = EventBus.publish(events);
for (Database.SaveResult sr : results) {
if (!sr.isSuccess()) {
// Log publish failure — do not throw, allow Shipment save
System.debug('Event publish failed: ' + sr.getErrors());
}
}
}
}
}
Apex Snippet — One Subscriber (Billing)
// ShipmentDeliveredBillingTrigger.trigger
// NOTE: shown inline for brevity — Apex triggers (platform event triggers
// included) cannot make callouts directly; production code would enqueue a
// Queueable implementing Database.AllowsCallouts to perform the HTTP call.
trigger ShipmentDeliveredBillingTrigger on Shipment_Delivered__e (after insert) {
List<Shipment_Delivered__e> events = Trigger.new;
for (Shipment_Delivered__e evt : events) {
try {
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:Billing_API/invoices/trigger');
req.setMethod('POST');
req.setHeader('Content-Type', 'application/json');
req.setTimeout(15000);
req.setBody(JSON.serialize(new Map<String, Object>{
'shipmentId' => evt.Shipment_Id__c,
'customerId' => evt.Customer_Id__c,
'deliveredAt' => evt.Delivered_At__c
}));
HttpResponse res = new Http().send(req);
if (res.getStatusCode() != 200) {
// EventBus.RetryableException triggers automatic replay
throw new EventBus.RetryableException(
'Billing API returned: ' + res.getStatusCode()
);
}
} catch (CalloutException e) {
throw new EventBus.RetryableException('Callout failed: ' + e.getMessage());
}
}
}
Governor Limit & Platform Awareness
| Concern | Detail |
|---|---|
| Event publish limit | 250k events/24hrs (Enterprise) — monitor via Event Monitoring |
| Platform Event trigger limits | Same Apex limits apply per trigger execution |
| RetryableException | Causes automatic event replay — use deliberately, not for all errors |
| CometD subscriber sessions | Concurrent streaming clients are capped per edition — verify current org limits |
| 72-hour replay window | Missed events recoverable via ReplayId — document this SLA for ops |
What a Strong Candidate Must Mention
- Transaction atomicity — event is rolled back if the publisher transaction fails (no phantom events)
- EventBus.RetryableException — mechanism for automatic retry without custom retry logic
- ReplayId for durability — subscribers can replay missed events up to 72 hours
- Independent failure isolation — one downstream failure does not affect the other three
- Named Credentials per subscriber — each integration uses its own credential, not a shared one
Follow-Up Questions
- "Change Data Capture (CDC) also uses the Streaming API. When would you choose CDC over a custom Platform Event, and what are the schema and volume tradeoffs?"
- "If the Kafka REST proxy subscriber needs to process events in strict order (FIFO), how does the Platform Event model support or complicate that requirement?"
- "How would you monitor event delivery health in production — what tooling and metrics would you set up?"
Question 7 — Apex Test Architecture and Code Coverage Strategy
Domain: Apex Programming | Difficulty: Medium | Time: 20 min
Scenario
A developer on your team opens a PR. The Apex class they've written has 92% code coverage — well above the 75% threshold. You do a quick code review and raise concerns about the quality of the tests despite the high coverage number. The developer pushes back, saying coverage is what Salesforce requires and the tests pass. Describe how you would explain the problem to the developer, and then demonstrate what a high-quality Apex test for a complex trigger handler looks like.
Model Answer
The Coverage Trap — Explaining to the Developer
High code coverage is a necessary condition for deployment, not a sufficient condition for quality. Common patterns that produce high coverage with low quality:
- System.assert(true) or no assertions at all
- Tests that call the method but never verify side effects on records
- @IsTest(SeeAllData=true) relying on real org data — non-deterministic and breaks in clean sandboxes
- Testing only the happy path — no bulk, no governor limits, no error paths, no edge cases
- Static test data that doesn't represent realistic volume
A test suite should verify behaviour, not just execution.
What a High-Quality Test Looks Like
The test must cover:
- Single record — basic assertion on expected field changes
- Bulk (200 records) — assert no governor limit exceptions
- Edge case — record where status does NOT change (no-op path)
- Negative path — what happens when required related data is missing
- Assertions on DML side effects — re-query the database after the operation
@IsTest
private class ClaimTriggerHandlerTest {
// ── Test Data Factory ────────────────────────────────────────────────────
@TestSetup
static void setup() {
// Create parent Policy records
List<Policy__c> policies = new List<Policy__c>();
for (Integer i = 0; i < 5; i++) {
policies.add(new Policy__c(Name = 'Policy-' + i, Open_Claims__c = 10)); // seed value — the Claim insert trigger recalculates this rollup
}
insert policies;
// Create Claim records attached to policies
List<Claim__c> claims = new List<Claim__c>();
for (Policy__c p : policies) {
for (Integer j = 0; j < 40; j++) {
claims.add(new Claim__c(
Policy__c = p.Id,
Status__c = 'Open',
OwnerId = UserInfo.getUserId()
));
}
}
insert claims; // 200 total — max bulk trigger size
}
// ── Test 1: Single record happy path ─────────────────────────────────────
@IsTest
static void testSingleClaimClosure_updatesRollup() {
Claim__c claim = [SELECT Id, Policy__c FROM Claim__c LIMIT 1];
Test.startTest();
claim.Status__c = 'Closed';
update claim;
Test.stopTest();
// Assert DML side effect — re-query after transaction
Policy__c updated = [SELECT Open_Claims__c FROM Policy__c WHERE Id = :claim.Policy__c];
System.assertEquals(39, updated.Open_Claims__c,
'Closing 1 of 40 open claims should leave 39 open');
}
// ── Test 2: Bulk (200 records) — governor limit proof ───────────────────
@IsTest
static void testBulkClosure_noGovernorViolation() {
List<Claim__c> allClaims = [SELECT Id FROM Claim__c];
System.assertEquals(200, allClaims.size(), 'Setup should have created 200 claims');
for (Claim__c c : allClaims) {
c.Status__c = 'Closed';
}
Test.startTest();
update allClaims; // Will throw LimitException if handler is not bulkified
Test.stopTest();
// Assert all policies now have 0 open claims
for (Policy__c p : [SELECT Open_Claims__c FROM Policy__c]) {
System.assertEquals(0, p.Open_Claims__c,
'All claims closed — policy should have 0 open claims');
}
}
// ── Test 3: No-op — status unchanged, no DML on Policy ──────────────────
@IsTest
static void testNoStatusChange_doesNotUpdatePolicy() {
Claim__c claim = [SELECT Id, Policy__c FROM Claim__c LIMIT 1];
Policy__c before = [SELECT Open_Claims__c FROM Policy__c WHERE Id = :claim.Policy__c];
Test.startTest();
// Update a field that is NOT Status__c
claim.Description__c = 'Updated description';
update claim;
Test.stopTest();
Policy__c after = [SELECT Open_Claims__c FROM Policy__c WHERE Id = :claim.Policy__c];
System.assertEquals(before.Open_Claims__c, after.Open_Claims__c,
'Non-status update should not recalculate rollup');
}
// ── Test 4: Already-closed claim re-saved (idempotency) ─────────────────
@IsTest
static void testAlreadyClosedClaim_isIdempotent() {
Claim__c claim = [SELECT Id, Status__c FROM Claim__c LIMIT 1];
claim.Status__c = 'Closed';
update claim;
Policy__c afterFirst = [SELECT Open_Claims__c FROM Policy__c WHERE Id IN
(SELECT Policy__c FROM Claim__c WHERE Id = :claim.Id)];
Test.startTest();
// Save the already-closed claim again — should not double-deduct
claim.Description__c = 'Re-save of closed claim';
update claim;
Test.stopTest();
Policy__c afterSecond = [SELECT Open_Claims__c FROM Policy__c WHERE Id IN
(SELECT Policy__c FROM Claim__c WHERE Id = :claim.Id)];
System.assertEquals(afterFirst.Open_Claims__c, afterSecond.Open_Claims__c,
'Saving an already-closed claim should not change the rollup again');
}
}
What Makes This Test Suite High Quality
| Property | How It's Achieved |
|---|---|
| No SeeAllData=true | @TestSetup creates all data deterministically |
| Meaningful assertions | Every test re-queries and asserts specific field values |
| Bulk coverage | 200-record test proves bulkification |
| Behaviour verification | Tests verify what changed in the database, not just that code ran |
| No System.assert(true) | All assertions have descriptive failure messages |
What a Strong Candidate Must Mention
- @TestSetup for shared, deterministic test data — never SeeAllData=true
- Re-querying after Test.stopTest() — assertions on persisted state, not in-memory objects
- Bulk test with exactly 200 records — the per-chunk trigger batch size
- Testing the no-op path — the handler should not perform unnecessary DML
- Descriptive assertion messages — makes CI failure output actionable
- Descriptive assertion messages — makes CI failure output actionable
Follow-Up Questions
- "How do you test a method that makes a callout to an external API? Walk me through
HttpCalloutMockand whyTest.setMock()must be called beforeTest.startTest()."
- "You need to test an Apex class that sends emails. How do you assert the email was sent without actually sending it in a test context?"
- "Your test class takes 4 minutes to run in CI, blocking the pipeline. What strategies would you use to diagnose and reduce test execution time?"
Question 8 — Optimizing SOQL for a Reporting Use Case with 10M+ Records
Domain: Salesforce Performance Optimization | Difficulty: Hard | Time: 20–25 min
Scenario
A customer service manager runs a daily Salesforce report that lists all Case records created in the last 90 days, filtered by Account.Industry, Status, and Priority, with a custom formula field Response_SLA_Met__c used in the WHERE clause. The report takes 4–5 minutes to run and sometimes times out entirely. The org has 12 million Case records. You've been asked to diagnose and fix the performance.
Model Answer
Root Cause Diagnosis
Three likely causes, in order of severity:
- Formula field in the WHERE clause — Response_SLA_Met__c is a formula field. Formula fields generally cannot be indexed (only deterministic formulas qualify, via a Salesforce Support request), so every query filtering on this field scans all 12M records.
- Cross-object filter on Account.Industry — filtering on a parent field via a relationship does not use the Case object's indexes. Salesforce must join across objects, which is expensive at this volume.
- Non-selective filter combination — Status and Priority likely have low cardinality (few distinct values, e.g., 5 statuses × 3 priorities). These fields are not selective enough on their own to avoid a full table scan even if indexed.
Solution Strategy
1. Replace the formula field filter
Materialise Response_SLA_Met__c as a stored checkbox field (Response_SLA_Met_Stored__c) updated via a trigger or Flow when the SLA calculation changes. Stored fields are indexable.
// In CaseTriggerHandler — materialise the formula result
for (Case c : newList) {
    // Null-safe: treat missing timestamps as SLA not met
    c.Response_SLA_Met_Stored__c =
        (c.First_Response_Time__c != null && c.SLA_Deadline__c != null &&
         c.First_Response_Time__c <= c.SLA_Deadline__c);
}
2. Add a custom index on the stored field
For standard objects, submit a Salesforce Support ticket to add a custom index on Response_SLA_Met_Stored__c and CreatedDate (if not already indexed). CreatedDate is a standard indexed field but combining it with other filters requires a composite index request.
3. Move Account.Industry to a stamped field on Case
Denormalise Account.Industry into Case.Account_Industry__c using a trigger or Flow that stamps the value at case creation and on Account update. This enables direct indexing on the Case object without cross-object joins.
// Stamp industry at Case insert
trigger CaseTrigger on Case (before insert, before update) {
Set<Id> accountIds = new Set<Id>();
for (Case c : Trigger.new) {
if (c.AccountId != null) accountIds.add(c.AccountId);
}
Map<Id, Account> accounts = new Map<Id, Account>(
[SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
);
for (Case c : Trigger.new) {
if (accounts.containsKey(c.AccountId)) {
c.Account_Industry__c = accounts.get(c.AccountId).Industry;
}
}
}
4. Leverage SOQL Query Plan Tool
Before and after the changes, use the Query Plan Tool in the Developer Console to inspect query cost. A cost below 1 indicates a selective query; above 1, a full table scan is likely.
-- Before optimisation (problematic)
SELECT Id, CaseNumber, Status, Priority
FROM Case
WHERE CreatedDate = LAST_N_DAYS:90
AND Account.Industry = 'Technology' -- cross-object, not indexable on Case
AND Response_SLA_Met__c = false -- formula, never indexable
-- After optimisation
SELECT Id, CaseNumber, Status, Priority
FROM Case
WHERE CreatedDate = LAST_N_DAYS:90
AND Account_Industry__c = 'Technology' -- stamped, indexable
AND Response_SLA_Met_Stored__c = false -- materialised, indexable
5. Consider a Custom Report Type with Bucketing
If the report is primarily for aggregation (counts, percentages), replace it with a Summary Report with Bucketing on CreatedDate (by month/week). This dramatically reduces the rendered row count while preserving analytical value.
Selective vs. Non-Selective Filters
| Filter Type | Selective? | Notes |
|---|---|---|
| CreatedDate range | ✅ Yes (standard index) | Very effective at 12M records |
| Id / OwnerId | ✅ Yes | Always indexed |
| Formula fields | ❌ Rarely | Deterministic formulas only, via Salesforce Support; otherwise full scan |
| Cross-object fields | ❌ No (on child object) | Index lives on the parent |
| Low-cardinality picklists | ⚠️ Rarely | < 10% selectivity threshold |
| Custom indexed fields | ✅ Yes (if requested) | Submit to Salesforce Support |
What a Strong Candidate Must Mention
- Formula fields are effectively non-indexable (deterministic formulas only, via Support) — materialise to a stored field for query use
- Query Plan Tool in Developer Console — the primary diagnostic instrument
- Selectivity threshold — Salesforce uses an index only if the filter returns < 10% of records
- Denormalisation as a performance pattern — stamping parent fields onto child records
- Custom index requests — some fields require a Salesforce support case to index
Follow-Up Questions
- "You've materialised the formula field into a stored checkbox. Now, 3 months later, a developer accidentally deploys a change that removes the trigger keeping it in sync. How would you detect and remediate data drift at scale?"
- "The org also runs nightly batch jobs that re-query these 12M Cases. How does
LIMITandOFFSETbehave at this scale, and what cursor-based alternative would you recommend?"
- "Salesforce offers Big Objects for archiving high-volume historical data. When would you recommend migrating old Case records to a Big Object, and what query capabilities do you lose?"
Question 9 — Multi-Org Architecture: Connected App, OAuth, and Canvas
Domain: Security & Access Control, REST/SOAP API Integrations | Difficulty: Hard | Time: 20 min
Scenario
Your company operates two Salesforce orgs: a production CRM org (Sales Cloud) and a separate Field Service org (Field Service Lightning). A Field Service agent needs to view a specific Visualforce page from the CRM org inside the Field Service org's UI, without re-authenticating. Additionally, a third-party mobile app built by your in-house team needs to authenticate to the CRM org on behalf of individual users. Describe how you would design the authentication and SSO architecture for both scenarios.
Model Answer
Scenario A: Cross-Org SSO via Canvas
Canvas allows external web apps and other Salesforce orgs to be embedded inside a Salesforce UI. To embed a CRM org Visualforce page in the FSL org:
- In the CRM org — Create a Connected App with:
- OAuth scopes: api, refresh_token, full (or the narrowest applicable scope)
- Canvas app URL: the Visualforce page URL in the CRM org
- Access Method: Signed Request (POST) — the CRM org receives an HMAC-SHA256-signed request from the FSL org
- Permitted Users: Admin-approved or Self-Authorized
- OAuth scopes:
- In the FSL org — Surface the Canvas app via a Visualforce page, App Builder component, or App Launcher tile. When a user opens it, the FSL org sends a signed HMAC-SHA256 request to the Canvas URL.
- Token exchange flow:
- FSL org signs a Canvas request using the Connected App's Consumer Secret
- CRM org validates the signature and extracts the user context
- The embedded page renders in the FSL org's iframe with the CRM user's session
- Security consideration — The Canvas signed request contains the user's OAuth token for the CRM org. This token must never be logged or exposed. Parse the signed request only server-side, using the Canvas.SignedRequest Apex class.
Scenario B: Mobile App OAuth (Authorization Code + PKCE Flow)
For a native mobile app authenticating on behalf of users, use the OAuth 2.0 Authorization Code + PKCE flow (not the legacy User-Agent flow which is deprecated for mobile).
Mobile App
│
│ 1. Redirect to:
│ https://{org}.salesforce.com/services/oauth2/authorize
│ ?response_type=code
│ &client_id={consumer_key}
│ &redirect_uri=myapp://oauth/callback
│ &code_challenge={PKCE_challenge}
│ &code_challenge_method=S256
▼
Salesforce Login Page (user authenticates with MFA if enabled)
│
│ 2. Returns authorization_code to redirect_uri
▼
Mobile App
│
│ 3. POST to /services/oauth2/token
│ { grant_type: authorization_code, code: ..., code_verifier: ... }
▼
Salesforce returns: access_token + refresh_token
│
│ 4. Mobile app stores refresh_token securely (device keychain)
│ Uses access_token for REST API calls
│ Refreshes using: grant_type=refresh_token
Connected App Configuration:
- OAuth Scopes: api, refresh_token, offline_access
- Callback URL: a custom URI scheme (myapp://oauth/callback) declared in the app's manifest
- Token Validity: set the refresh token policy to "Immediately expire refresh token on next use" for high-security orgs, or "Refresh token is valid until revoked" for better UX
- IP Relaxation: "Relax IP restrictions" for mobile (users are not on corporate IP ranges)
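A minimal sketch of generating the PKCE pair from steps 1 and 3 of the flow above (Web Crypto API, as available in browsers and recent Node; a native app would use its platform's crypto library):
// Generate a PKCE code_verifier and S256 code_challenge — illustrative sketch
function base64UrlEncode(bytes) {
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

async function createPkcePair() {
  // 32 random bytes → high-entropy verifier
  const verifierBytes = crypto.getRandomValues(new Uint8Array(32));
  const codeVerifier = base64UrlEncode(verifierBytes);
  // S256 challenge = base64url(SHA-256(verifier))
  const digest = await crypto.subtle.digest(
    'SHA-256', new TextEncoder().encode(codeVerifier));
  const codeChallenge = base64UrlEncode(new Uint8Array(digest));
  // Send the challenge in step 1; present the verifier in step 3
  return { codeVerifier, codeChallenge };
}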
Permission Set / Profile for the Connected App:
- Create a dedicated Permission Set Mobile_App_Access granted to mobile app users
- Restrict which objects the mobile app can access using the Connected App's OAuth policies
- Use Named Principal for service-to-service integrations; Per User for user-delegated access
Security Hardening
Connected App Security Checklist:
✅ PKCE required for all mobile/SPA clients — no client_secret in mobile app
✅ Admin pre-authorization — prevent unapproved self-authorization
✅ IP allowlisting on Connected App for server-to-server integrations
✅ Minimum viable OAuth scopes — never grant 'full' unless necessary
✅ Token expiry policies reviewed — default access token = 2 hours
✅ Certificate-based auth for server integrations instead of client secret
✅ Monitor via Event Monitoring: LoginEvent, ConnectedAppEvent
What a Strong Candidate Must Mention
- PKCE over implicit/user-agent flow for mobile — security best practice, Salesforce deprecating legacy flows
- Canvas signed request — HMAC-SHA256 validation, token is embedded in signed payload
- Minimum viable OAuth scopes — principle of least privilege applied to Connected Apps
- Refresh token storage — device keychain, not localStorage or app state
- Event Monitoring for OAuth anomaly detection — LoginEvent, ConnectedAppEvent
Follow-Up Questions
- "A user leaves the company. Walk me through every place their OAuth access must be revoked — Connected App sessions, refresh tokens, Named Credentials, and any external system tokens."
- "Your mobile app uses the refresh token to get a new access token, but Salesforce returns
invalid_grant. What are the three most likely causes and how do you diagnose each?"
- "How would you implement Single Logout (SLO) so that when a user logs out of the FSL org, their Canvas session in the CRM org is also terminated?"
Question 10 — Designing for Regulated Industries: Field Encryption, Audit Trails & Compliance
Domain: Security & Access Control | Difficulty: Hard | Time: 25 min
Scenario
A healthcare company is implementing Salesforce Health Cloud to manage patient outreach campaigns. The Salesforce implementation must comply with HIPAA. Patient records on a custom Patient__c object contain Protected Health Information (PHI): Date_of_Birth__c, Diagnosis_Code__c, SSN__c, and Insurance_ID__c. The company's security officer has the following requirements:
- PHI fields must be encrypted at rest
- All access to PHI fields must be logged
- The field history for PHI fields must be retained for 7 years
- Only specific roles (Clinical Staff, Compliance Officers) may view decrypted values
- Developers must never have access to production PHI
Design the complete data protection architecture.
Model Answer
1. Encryption at Rest — Shield Platform Encryption
Use Salesforce Shield Platform Encryption (not Classic Encryption) for HIPAA-grade field-level encryption.
Shield vs Classic Encryption:
| Capability | Classic Encryption | Shield Encryption |
|---|---|---|
| Field types supported | Custom text fields only | Text, Email, Phone, Date, Number + more |
| SOQL filtering on encrypted | ❌ No | ✅ Yes (deterministic encryption) |
| Key management | Salesforce-managed | Customer-managed (Bring Your Own Key) |
| Standard object fields | ❌ No | ✅ Yes (with some limits) |
| Files/Attachments | ❌ No | ✅ Yes |
Implementation steps:
- Enable Shield Platform Encryption in Setup
- Generate a tenant secret — this is the customer-controlled entropy component
- Configure Bring Your Own Key (BYOK) if required by the compliance team (key stored in HSM outside Salesforce)
- Encrypt specific fields:
Patient__c.SSN__c — deterministic encryption (allows exact-match SOQL filtering)
Patient__c.Diagnosis_Code__c — probabilistic encryption (stronger, no filtering)
Patient__c.Date_of_Birth__c — deterministic (exact-match filtering only; encrypted fields do not support range queries, so age-range logic must be computed elsewhere)
Patient__c.Insurance_ID__c — probabilistic
Setup → Platform Encryption → Encryption Policy
→ Patient__c.SSN__c → Encrypt [Deterministic]
→ Patient__c.Diagnosis_Code__c → Encrypt [Probabilistic]
→ Patient__c.Date_of_Birth__c → Encrypt [Deterministic]
→ Patient__c.Insurance_ID__c → Encrypt [Probabilistic]
2. Access Logging — Shield Event Monitoring
Enable Shield Event Monitoring to capture PHI access through the Event Log File API — report runs, API reads, and logins each surface as their own event log type.
Key event types to capture:
FieldAuditTrail — stores historical field value changes (up to 10 years with the Field Audit Trail add-on)
ReportEvent — logs every run of a report exposing PHI, including who ran it and when
LoginEvent — captures login anomalies that may indicate credential theft
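Event Monitoring data is itself queryable, which is how log files reach a SIEM — a minimal sketch (the date filter and forwarding step are illustrative):
// Each EventLogFile row carries a CSV payload in the LogFile blob field
List<EventLogFile> logs = [
    SELECT Id, EventType, LogDate, LogFileLength, LogFile
    FROM EventLogFile
    WHERE EventType = 'ReportEvent'
      AND LogDate = LAST_N_DAYS:7
];
for (EventLogFile f : logs) {
    // Large logs can strain heap limits — stream via the REST API in practice
    String csv = f.LogFile.toString();
    // ship csv to the SIEM ingestion endpoint here
}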
// Audit log access programmatically if additional business logic needed
public with sharing class PHIAccessLogger {
public static void logAccess(Id patientId, String fieldName, String userId) {
PHI_Access_Log__c log = new PHI_Access_Log__c(
Patient__c = patientId,
Field_Name__c = fieldName,
Accessed_By__c = userId,
Access_Time__c = System.now(),
Access_IP__c = Auth.SessionManagement.getCurrentSession()
?.get('SourceIp')
);
// Direct insert shown for brevity — in production, decouple via a Platform
// Event or async method so DML never runs inside read/reporting contexts
insert log;
}
}
3. 7-Year Field History — Field Audit Trail Add-On
Standard Field History Tracking retains only 18 months. For HIPAA's 7-year retention requirement:
- Purchase the Field Audit Trail add-on — extends retention up to 10 years
- Configure via Setup → Field Audit Trail → Define Retention Policy
- Set Patient__c PHI fields to 2,555 days (7 years) retention
- Archived history lands in the FieldHistoryArchive big object, where it remains queryable (subject to big-object query restrictions)
4. Role-Based PHI Access
OWD: Patient__c → Private
Profiles:
Clinical Staff Profile → Read/Edit Patient__c (sees decrypted values via Shield)
Compliance Officer Profile → Read Patient__c + Field Audit Trail access
Standard User Profile → Read Patient__c (encrypted values display as ****)
Developer Profile → NO access to Patient__c in production
Permission Sets:
"PHI_Viewer" → Grants 'View Encrypted Data' permission (Shield-specific)
→ Assigned only to Clinical Staff and Compliance Officers
Sharing Model:
→ Role hierarchy: Compliance Officer above Clinical Staff (upward visibility)
→ Sharing Rule: Share Patient__c with Compliance Officer role (criteria-based, Read)
Critical Shield Permission: The View Encrypted Data system permission controls who sees decrypted vs masked values. Without it, a user with field read access sees **** instead of the actual value.
5. Developer Access Controls
Production Developer Access Policy:
✅ Developers have NO profiles with Patient__c access in production
✅ All development done in scratch orgs with synthetic/anonymised test data
✅ Data masking applied in sandbox refresh (Setup → Sandbox → Data Mask)
✅ Salesforce Shield Data Detect scans sandboxes for PHI that leaked from production
✅ Named Credentials and encryption keys managed by Security team — not dev team
✅ Separate deployment user (service account) for CI/CD — minimal permissions
Full Architecture Summary
Data Layer: Shield Encryption (BYOK) on 4 PHI fields
Access Control: PHI_Viewer Permission Set + View Encrypted Data permission
Audit Trail: Shield Event Monitoring + Field Audit Trail (10yr add-on)
Logging: PHI_Access_Log__c custom object for business-level audit
Dev Isolation: Scratch orgs with anonymised data + Sandbox Data Masking
Monitoring: Event Log File API → SIEM integration (Splunk/Datadog)
What a Strong Candidate Must Mention
- Shield vs Classic Encryption distinction — deterministic vs probabilistic, standard-object support
- View Encrypted Data system permission — without it, encrypted fields display masked (****)
- Field Audit Trail add-on for >18 months retention — standard field history is insufficient for HIPAA
- Sandbox Data Masking — prevents PHI from appearing in non-production orgs
- BYOK key management — separation of duties between Salesforce and the encryption key custodian
Follow-Up Questions
- "A developer queries
Patient__cvia the Apex Metadata API in a DevOps pipeline to check field configuration. Does Shield encryption protect against this? What does and doesn't get encrypted at the metadata level?"
- "Your SIEM team asks you to stream all PHI field access events to Splunk in real-time. Walk through the architecture using Event Monitoring, the Event Log File API, and any middleware required."
- "A compliance audit requires you to prove that a specific patient's SSN was not accessed by anyone outside the Clinical Staff role in the last 6 months. Walk me through exactly which Salesforce tools you would use and what queries you'd run."