Methodology · How the Signal Engine calculates the 7 signals
The math behind the Signal Engine. And what we explicitly don't fake.
The Signal Engine is the reading layer behind InbXr. It ingests
DNS data, blacklist checks, ESP per-contact activity, and authentication
headers, and turns them into the 7 Inbox Signals you see in your dashboard.
Every deliverability tool has a calculation methodology. Most keep theirs
opaque. We publish ours in full, including the formulas, the data sources,
and the things we refuse to claim to measure because we can't honestly
measure them. If a competitor's score sounds too good to be true, check
whether they publish methodology at this level of detail.
Signal 01
Bounce Exposure
Weight: 25 pts of 100
What it measures
The percentage of contacts likely to bounce or be filtered on your next send,
combining historical hard-bounce flags with predictive flags for role addresses
(info@, admin@), disposable domains, and catch-all mailboxes.
Formula
score = 100 − (hard_bounce_rate × 8) − (predictive_risk × 2)
Clamped to 0-100. Predictive risk is derived from role/disposable/catch-all flags per contact.
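In code, the formula and clamp look like this (a minimal sketch; we're assuming both rates are expressed as percentages on a 0-100 scale, which the formula doesn't state explicitly):

```python
def bounce_exposure_score(hard_bounce_rate: float, predictive_risk: float) -> float:
    """Signal 01: score = 100 - (hard_bounce_rate * 8) - (predictive_risk * 2),
    clamped to the 0-100 range. Both inputs assumed to be percentages (0-100)."""
    raw = 100 - (hard_bounce_rate * 8) - (predictive_risk * 2)
    return max(0.0, min(100.0, raw))
```

For example, a list with a 5% hard-bounce rate and 10% predictive risk scores 100 - 40 - 20 = 40.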
Data source
contact_segments.is_hard_bounce, is_role_address, is_disposable, is_catch_all (from ESP sync or CSV upload).
Signal 02 · Market-first
Engagement Trajectory (MPP-adjusted)
Weight: 25 pts of 100
What it measures
Real human engagement over the last 30 days, after removing contacts flagged as
likely Apple Mail Privacy Protection machine openers. This is the signal most tools
get wrong. Almost all competitors just check whether the recipient's address is on
an iCloud domain, a method that catches roughly 15% of real MPP opens.
MPP detection (hybrid by ESP)
Mailgun: User-Agent string parsing + Apple IP range (17.0.0.0/8).
Confidence: HIGH.
Mailchimp, ActiveCampaign: timing heuristic (opens within 2s of
delivery flagged as machine). Confidence: MEDIUM.
AWeber and others: @icloud.com / @me.com / @mac.com domain fallback
only. Confidence: LOW.
Sticky flag: once a contact is flagged in any sync, the flag
persists. MPP is a per-install attribute, not per-send.
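The per-ESP routing above can be sketched as follows. Only the detection methods and confidence tiers come from the methodology; the exact User-Agent substrings and the function shape are illustrative assumptions:

```python
import ipaddress
from datetime import datetime, timedelta

APPLE_RANGE = ipaddress.ip_network("17.0.0.0/8")
ICLOUD_DOMAINS = {"icloud.com", "me.com", "mac.com"}

def classify_open(esp: str, user_agent: str, open_ip: str,
                  opened_at: datetime, delivered_at: datetime,
                  recipient_domain: str) -> tuple[bool, str]:
    """Return (is_mpp, confidence) for one open event, per the hybrid above."""
    if esp == "mailgun":
        # HIGH confidence: Apple's proxy fetches from 17.0.0.0/8 with a
        # recognizable User-Agent (substrings here are assumed).
        is_mpp = ("Mail/" in user_agent and "Apple" in user_agent) or \
                 ipaddress.ip_address(open_ip) in APPLE_RANGE
        return is_mpp, "HIGH"
    if esp in ("mailchimp", "activecampaign"):
        # MEDIUM confidence: opens within 2s of delivery look machine-made.
        return (opened_at - delivered_at) <= timedelta(seconds=2), "MEDIUM"
    # LOW confidence fallback: iCloud-family domain only.
    return recipient_domain.lower() in ICLOUD_DOMAINS, "LOW"
```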
Formula
real_engagement_30d = (non_mpp_opens_30d + clicks_30d) / non_mpp_contacts
The score is a non-linear function of real_engagement_30d: engagement above 8% earns credit, and above 25% earns strong credit.
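The exact curve isn't published here, only its shape, so this sketch substitutes an assumed piecewise-linear ramp between the 8% and 25% thresholds:

```python
def engagement_trajectory_score(non_mpp_opens_30d: int, clicks_30d: int,
                                non_mpp_contacts: int) -> float:
    """Sketch of Signal 02. The real curve is non-linear; this ramp is an
    assumption that only preserves the published 8% / 25% thresholds."""
    if non_mpp_contacts == 0:
        return 0.0
    real = (non_mpp_opens_30d + clicks_30d) / non_mpp_contacts
    if real <= 0.08:
        return 0.0            # below the credit floor
    if real >= 0.25:
        return 100.0          # strong credit
    # assumed linear ramp between the two thresholds
    return 100.0 * (real - 0.08) / (0.25 - 0.08)
```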
Signal 03 · Market-first
Acquisition Quality
Weight: 15 pts of 100
What it measures
Day-1, day-7, and day-30 engagement rates per acquisition cohort. Tells you which
signup sources produce contacts that actually engage and which are quietly dragging
your blended numbers down.
Data source
contact_segments.acquisition_date grouped into weekly cohorts, joined with engagement timestamps. Requires per-contact acquisition_date from the ESP or CSV upload.
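A sketch of the weekly cohort grouping. The field names (acquisition_date, first_engaged_at) are hypothetical stand-ins for the synced columns:

```python
from collections import defaultdict
from datetime import date

def cohort_rates(contacts: list[dict]) -> dict:
    """Group contacts into weekly acquisition cohorts and compute the share
    that first engaged within 1, 7, and 30 days of acquisition."""
    cohorts = defaultdict(lambda: {"n": 0, 1: 0, 7: 0, 30: 0})
    for c in contacts:
        week = c["acquisition_date"].isocalendar()[:2]   # (year, ISO week)
        bucket = cohorts[week]
        bucket["n"] += 1
        if c["first_engaged_at"] is not None:
            lag = (c["first_engaged_at"] - c["acquisition_date"]).days
            for window in (1, 7, 30):
                if lag <= window:
                    bucket[window] += 1
    return {week: {f"day_{w}": b[w] / b["n"] for w in (1, 7, 30)}
            for week, b in cohorts.items()}
```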
Signal 04
Domain Reputation
Weight: 15 pts of 100
What it measures
Sender-domain blocklist status (110+ RBLs checked every 6 hours) combined with
recipient provider distribution. High Yahoo/AOL concentration raises risk because
of tighter post-April-2024 filtering at those providers.
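Blocklist checks are plain DNS lookups against each RBL zone. A sketch of the standard DNSBL query convention (the zone name is a placeholder; production code fans out across all 110+ zones and caches results):

```python
import socket

def rbl_query_name(ip: str, zone: str) -> str:
    """IP-based RBLs are queried with the octets reversed: 1.2.3.4 checked
    against zone 'zen.example.org' becomes '4.3.2.1.zen.example.org'."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(query_name: str) -> bool:
    """Any A-record answer means 'listed'; NXDOMAIN means clean."""
    try:
        socket.gethostbyname(query_name)
        return True
    except socket.gaierror:
        return False
```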
Signal 05
Spam Trap Exposure
Weight: 10 pts of 100
What it measures
Probabilistic spam trap risk based on the conditions that produce trap hits:
dormancy depth (180+ and 365+ day inactive cohorts), list age, and acquisition
source pattern. We score the probability of hitting a trap before you send,
not after.
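As an illustration of the shape of that scoring, here is a sketch with invented weights; the production coefficients aren't published here:

```python
def spam_trap_risk(pct_inactive_180d: float, pct_inactive_365d: float,
                   list_age_years: float, pct_unknown_source: float) -> float:
    """Returns a 0-100 risk estimate from the conditions named above.
    All weights below are illustrative assumptions, not production values."""
    risk = (pct_inactive_180d * 0.3       # shallow dormancy: recycled-trap risk
            + pct_inactive_365d * 0.4     # deep dormancy weighs heavier
            + min(list_age_years, 5) * 4  # older lists accumulate more traps
            + pct_unknown_source * 0.2)   # unverifiable acquisition sources
    return min(100.0, risk)
```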
Honest scope
We cannot confirm an individual address is a spam trap without hitting it. What
we can do is read the conditions that produce trap hits with high statistical
reliability, and score the segments most likely to carry that risk so you can
suppress them first. This is risk scoring, not trap identification.
Signal 06
Authentication Standing
Weight: 5 pts of 100 (gating)
What it measures
Compliance with the 2024-2025 Gmail, Yahoo, and Microsoft bulk sender requirements:
SPF valid, DKIM signed, DMARC policy (must be quarantine or reject, not none),
and one-click List-Unsubscribe headers. Weighted lightly because it's binary
gating. If broken, everything else is moot.
Reference
Scored against our isp_compliance_requirements table, which is seeded with the
published enforcement requirements from each provider (Gmail/Yahoo Feb 2024,
Microsoft May 2025).
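The gating behavior described above can be sketched as an all-or-nothing check. We're assuming the 5 points are granted only when every requirement passes, which is one reasonable reading of "binary gating":

```python
def auth_gate(spf_valid: bool, dkim_signed: bool, dmarc_policy: str,
              one_click_unsub: bool) -> int:
    """Signal 06 sketch: each requirement is pass/fail, and a DMARC policy
    of 'none' does not count toward the 2024+ bulk sender rules."""
    checks = [spf_valid,
              dkim_signed,
              dmarc_policy in ("quarantine", "reject"),
              one_click_unsub]
    return 5 if all(checks) else 0
```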
Signal 07
Decay Velocity
Weight: 5 pts of 100
What it measures
Rate of change in your composite Signal Score. A list trending downward 3 points/week
is in worse shape than a list trending upward from a lower starting point.
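A least-squares slope over the recent weekly composite scores captures that rate of change (a sketch; the window length and any smoothing are assumptions):

```python
def decay_velocity(weekly_scores: list[float]) -> float:
    """Points-per-week trend of the composite score via a simple
    least-squares slope. Negative means declining."""
    n = len(weekly_scores)
    if n < 2:
        return 0.0
    mean_x = (n - 1) / 2
    mean_y = sum(weekly_scores) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in enumerate(weekly_scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den
```

A list tracking [80, 77, 74, 71] over four weeks has a velocity of -3 points/week.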
Why we don't predict day counts
Some tools advertise "your list will hit danger in 23 days." That requires historical
training data on comparable senders, which we don't yet have. Instead of making up a
number, we use trend language: "declining toward danger" is honest; "23 days" isn't.
Once we have 6+ months of cross-sender historical data, we may add proper predictions
with error bars disclosed.
What InbXr refuses to fake
Confirming individual addresses as spam traps. We cannot confirm a
specific address is a Spamhaus, Abusix, or Cloudmark trap without hitting it. What
Signal 05 does is read the upstream conditions that produce trap hits
(dormancy depth, list age, acquisition source pattern) and score the probability
against your list before you send. Risk scoring, not trap identification, and we
say so in Signal 05 above.
Day-count predictions without historical training data. See Signal 07
above. We use trend language only.
MPP adjustment based only on @icloud.com domain matching. That method
catches ~15% of real MPP opens. We use User-Agent parsing, Apple IP ranges, timing
heuristics, and domain matching as a hybrid, with the confidence level disclosed per ESP.
Content-based "spam score" predictions. Content-based heuristics are
wildly unreliable in 2026. Inbox providers use ML models on hundreds of features.
We don't pretend a regex against your subject line tells you whether a campaign will
deliver. Instead we give you the 7 upstream signals that actually matter.
Questions about the methodology?
Reply to any InbXr email. Joe reads every response personally.
Try it on your list →