Thesis
Scrum is one of the most widely used software development frameworks (Digital.ai 18th State of Agile, 2025), yet its canonical literature was written almost entirely before fully-distributed work was the default. The Scrum Guide 2020’s only locational guidance is “Optimally, all events are held at the same time and place to reduce complexity” (Scrum Guide 2020) — and it points in the wrong direction for distributed teams.
Founders running distributed teams typically assume one of two false positions: either “Scrum doesn’t work for remote” (so they abandon it and lose the empirical control loop) or “we’ll just run Scrum over Zoom” (so they import sync-presuming events into an async-default culture).
The honest answer is more useful and more demanding: Scrum’s empirical core — Transparency, Inspection, Adaptation, plus the Sprint Goal, Definition of Done, and Product Owner accountability — is exactly what high-performing distributed companies run on. The prescribed sync events around that core are the pre-distributed prescription that needs adapting. The peer-reviewed evidence, the canonical Scrum bodies’ own remote-adaptation guidance, and the operating practices of GitLab, 37signals, Linear, and Doist all converge on this answer. The sharpest single claim at distributed-native scale: LeSS’s co-location rule is empirically refuted by the site’s own evidence base.
Symptoms
You need this framework if your distributed team is hitting these patterns:
1. The Daily Scrum has stretched past 15 minutes. Cucolaș & Russo (2023) found in a multi-method study (12 interviews + survey of 138 engineers) that remote standups routinely extended past the timebox “to enforce team communication and maintain transparency.” The 15-minute timebox in the Scrum Guide presumes ambient context that distributed teams do not have.
2. Retrospectives feel hollow. Same study: remote retros “lacked the human aspect,” forcing Scrum Masters to invent new engagement techniques. The retro is structurally the worst-fitting Scrum event in distributed contexts.
3. The minority time zone is paying for the daily sync. A 15-minute daily ceremony at 9am in the headquarters time zone is 1am in Tokyo or 3am in Sydney. The “same time and place” Scrum Guide guidance pushes the cost of synchrony onto the people farthest from headquarters.
4. Sprint Reviews have become demos to nobody. Stakeholders are async; the live demo runs to an empty room or to a recording nobody watches. The empirical control loop’s stakeholder-feedback step is the one most often skipped in distributed Scrum.
5. Velocity numbers are being managed instead of used. Story points are not in the Scrum Guide (Scrum Guide 2020) but are taught everywhere. In distributed teams, they often become coordination theater that obscures the real planning question.
6. The Scrum Master role has no clear function. In a company with strong written norms, a handbook, and a Product Owner who is the DRI, the Scrum Master’s accountability for “establishing Scrum as defined in the Scrum Guide” (Scrum Guide 2020) is structurally ambiguous.
Root Causes
The Scrum Guide is silent on distribution and the bodies have not reconciled their own positions. The Scrum Guide 2020’s only locational sentence is “Optimally, all events are held at the same time and place to reduce complexity.” Scrum.org has published a multi-part Remote Agile series with per-event adaptations, and Scrum.org CEO Dave West has gone on the record that “remote work can be more productive than in-person work, but to be more effective you have to be mindful of the constraints (timezones, home environment, home bandwidth), tools… and the culture required for remote working” (Source). Scrum Alliance has published distributed-Scrum guidance including the Jesse Fewell argument that “what’s good for colocated Scrum teams is good for distributed Scrum teams.” The Guide and the bodies’ adaptation guidance disagree, and the canon has not been updated.
Scrum’s empirical core is location-agnostic; its events are not. Empiricism — “knowledge comes from experience and making decisions based on what is observed” (Scrum Guide 2020) — does not depend on synchrony or proximity. But the five events as prescribed (Sprint Planning, Daily Scrum, Sprint Review, Sprint Retrospective, plus the Sprint container) all assume same-time meeting structures. This is a design split inside the framework that distributed adopters inherit without noticing. Scrum.org’s own Sprint Review article makes the diagnostic stakes explicit: “Moving everything into the virtual realm is like putting the organization’s culture under a microscope: Remote Scrum reveals all shortcomings, problems, and issues at a much faster pace” (Source). That is exactly why the misfit between core and events shows up immediately when teams go distributed.
The largest empirical study of Scrum effectiveness did not isolate distribution as a variable. Verwijs & Russo (2023), surveying ≈2,000 Scrum teams and ≈5,000 developers, identified five empirically validated effectiveness factors — responsiveness, stakeholder concern, continuous improvement, team autonomy, management support — but did not split distributed and co-located teams. The dominant evidence base for “Scrum works” is not the dominant evidence base for “Scrum works distributed.”
LeSS made co-location a rule before the distributed-scale evidence existed. The LeSS rules state that co-location “has always been a part of the LeSS rules” and define it as a team that “sits together at the same table, and the other teams located in the same physical location are close (hearing distance) to them.” Bas Vodde’s 2019 post reinforces the position: “each individual team must be co-located” and “a typical co-located team still performs significantly better.” This was published before GitLab’s 2,500+ all-remote scale was visible and before Automattic’s sustained ~1,500-person all-remote operation was a contemporary data point. The LeSS rule is now empirically out of step with what the strongest distributed-native companies have proven.
What Companies Get Wrong
Importing Scrum events unchanged into an async-default culture. Running an orthodox 15-minute Daily Scrum over Zoom in a company that otherwise communicates async creates two operating systems that disagree about defaults. Either the standup taxes the minority time zone or it degrades into a written check-in — at which point it is no longer “the” Daily Scrum and the framework’s vocabulary stops being load-bearing. Scrum.org’s own framing acknowledges the stakes: “in the case of Scrum, the impact of remote work is amplified in the same ways that Scrum amplifies value and challenges” (Source).
Treating async standups as “the same thing, slower.” GitLab’s communication handbook makes a stronger point: async is a different coordination model with its own structure, not a degraded version of synchronous. A written daily check-in with a 24-hour SLA on blockers is doing different work than a 15-minute standup. Pretending otherwise leads to bad versions of both.
Adopting LeSS unchanged at distributed scale. The LeSS co-location rule is an explicit organizational design choice. Distributed-native companies that adopt LeSS without revisiting that rule import a constraint that contradicts their operating model. The site’s evidence is direct primary refutation: GitLab and Automattic operate the LeSS-equivalent scale (multiple feature teams on a single product) entirely without team co-location.
Treating velocity as planning instead of instrumentation. Story points and velocity are not in the Scrum Guide (Scrum Guide 2020) but are taught universally by Scrum.org, Scrum Alliance, and Scrum Inc. trainers. The disciplined distributed planners on this site — 37signals with appetite-based scoping (Source) and Linear with cycle-based scoping — explicitly reject estimates as the planning unit. There is no peer-reviewed evidence that velocity improves distributed-team outcomes.
Skipping the relationship-capital prerequisite for retros. Cucolaș & Russo (2023) found remote retros “lacked the human aspect.” This is consistent with the retreats practice on this site: relationship capital is what makes async candor possible. A distributed retro without prior investment in offline trust is structurally unable to surface what an in-person retro can.
What Works Instead
The keep / modify / drop guide for distributed teams. Keyed to (a) the Scrum Guide 2020, (b) Scrum.org’s own published Remote Agile guidance, (c) the Magalhães et al. 2025 Delphi consensus of nine senior Scrum Masters, (d) the company evidence below.
Keep orthodox:
- Empiricism (Transparency, Inspection, Adaptation). This is the load-bearing part. Every distributed company on this site runs a version of T-I-A through written artifacts.
- Sprint Goal as a single-sentence commitment. Stronger than “appetite” alone because it commits to a behavioral outcome.
- Definition of Done as the quality contract. Linear’s binary ship/no-ship is a DoD by another name.
- Product Owner accountability for value. The PO is the original DRI (decision-ownership practice).
- Increment + frequent release. Verwijs & Russo (2023) found release frequency was the strongest predictor of team effectiveness across ≈2,000 teams.
Modify:
- Daily Scrum → daily inspection, async-default. Scrum.org’s published remote-Daily-Scrum guidance defends the 15-minute sync timebox but concedes its purpose: “the Daily Scrum’s 15-minute timebox is not intended to solve all the issues addressed during the event. It is about creating transparency, thus triggering the inspection” (Source). The site’s distributed-native evidence and the Cucolaș & Russo (2023) finding that remote standups extend past the timebox suggest a different adaptation that preserves the same purpose: replace the sync standup with a written daily check-in (the 37signals automatic daily check-in pattern, Source) and surface blockers async with a 24-hour SLA. Move to sync only when async exchanges hit GitLab’s three-exchange threshold (Source) or when the Sprint Goal is at risk. This is the highest-leverage adaptation and the one Scrum.org’s published Daily Scrum guidance does not yet endorse.
- Sprint Planning → pre-planning async. Scrum.org’s own guidance endorses this directly: “By comparison to a co-located Scrum Team, the preparatory work, for example, regular Product Backlog refinements, keeping track of technical debt, and preserving a high-quality Definition of ‘Done’ is paramount to securing a successful outcome of the Sprint” (Source). Backlog refinement and a Sprint Goal candidate are authored async before any sync session; the sync session is for disambiguation only. Cap it at 4 hours for a 2-week sprint, half the 8-hour maximum the Scrum Guide sets for a one-month sprint.
- Sprint Review → async demo + targeted live feedback. Recorded walkthrough plus written notes for the broad stakeholder set. Reserve live sessions for stakeholders whose feedback genuinely benefits from simultaneity. Scrum.org reframes the Review’s purpose in distributed contexts as “Empiricism at work: inspect the Product Increment and adapt the Product Backlog” (Source) — the empirical control function, not the demo, is what must be preserved.
- Sprint Retrospective → async retro doc plus relationship-focused sync. Scrum.org’s published Part 5 takes the opposite position: “there is no reason whatsoever to deviate from this guideline just because we are working in a remote setting” (Source). The site’s evidence and the Cucolaș & Russo (2023) finding that remote retros “lacked the human aspect” point a different way: open the retro doc 48 hours before any sync session, use the live session for relationship work and pattern-naming, not idea generation. Pair with an annual or semi-annual retreat (retreats practice) — distributed retros sit on top of trust capital that has to be built separately.
- Sprint length. Default to 1–2 weeks for fully-distributed teams. Inference: this matches Linear’s published cycle length and the site’s planning-cycles evidence; the empirical literature on optimal sprint length is small and explicitly excludes distributed contexts (Anand et al. 2021 is representative — it models sprint length without addressing distributed teams).
Drop or replace:
- The “same time and place” guidance. Replace with: same artifact, asynchronously authored, sync only by escalation.
- Velocity / story points as a planning commitment. Keep them as instrumentation if useful. Replace them with appetite-style scoping (37signals, Linear cycles) as the primary planning unit.
- LeSS’s team co-location rule. The site’s distributed-scale evidence directly refutes it. Where LeSS is otherwise adopted, this rule is the one to drop first.
- Inference: the dedicated full-time Scrum Master role is structurally redundant in distributed companies under ~50 people that already have a maintained handbook, written communication norms, and a clear PO/DRI. Collapse the accountability into engineering management. The Scrum bodies will not endorse this; the site’s evidence and the de Souza Santos et al. (2023) “no-impact” meta-analysis finding support it as a credible adaptation.
Company Evidence
37signals / Basecamp — Runs the strongest non-Scrum cousin: 6-week Shape Up cycles plus 2-week cooldowns, appetite over estimates, no Scrum events, no SM role. Their published position is that real-time work should be infrequent: “there will be times when you do need to tightly collaborate with someone in real time, but those cases should be infrequent” (Source). What they preserve from Scrum’s empirical core: a named accountable owner per project, fixed cadence forcing prioritization, written artifacts (Heartbeats and Kickoffs at the cycle boundary, daily auto check-ins for inspection) as the inspection mechanism. What they drop: every prescribed Scrum event, plus estimation as a planning unit. (Source: Shape Up)
GitLab — All-remote at 2,500+ on a multi-horizon cadence (week / month / quarter / year / 3-year) governed by the handbook (Source). Structurally similar to Scrum@Scale’s nested PO Cycle and Scrum Master Cycle, but built on async-default communication. The handbook is the transparency artifact that the Scrum Guide assumes will be a co-located room. The single largest counter-evidence to LeSS’s co-location rule.
Linear — 1–2 week cycles with built-in cooldowns, appetite-based scoping, “Create momentum — don’t sprint” as a published principle (Source). Closest analog to a distributed-adapted Scrum: short cadence, binary quality gate (their version of DoD), single DRI per project, no daily standup. Raised an $82M Series C in June 2025 (Source).
Doist — ~93 people, async-first for 15+ years, no meetings by default. Runs a “heroes vs. housekeeping” structure within cycles to prevent maintenance load from corrupting feature work (Source). The Scrum events are not present; the empirical control loop is — through written project threads in their own product (Twist).
Automattic — ~1,500 people post-April-2025 restructuring, all-remote on the P2 blog system. Continuous deployment, division-level OKRs. Operates at LeSS-equivalent scale (multiple teams on one product surface) without team co-location, providing the longest-running primary refutation of the LeSS rule (Source).
Founder Actions
1. Audit which Scrum events your distributed team is actually getting value from. For one cycle, score each event against its Scrum Guide purpose: was the Sprint Goal clearer after Sprint Planning? Was progress inspected and adapted in the Daily Scrum? Did stakeholders give actionable feedback in the Sprint Review? If an event is theater — held because the framework says to — replace it with the async equivalent.
2. Move the Daily Scrum async and protect the Sprint Goal in writing. Replace the 15-minute live ceremony with a written daily check-in (the 37signals automatic daily check-in pattern). Surface the Sprint Goal at the top of the check-in every day. Reserve sync for the GitLab three-exchange threshold or for Sprint-Goal-at-risk situations.
3. Pre-plan asynchronously, then run a shorter sync session. Backlog refinement and the Sprint Goal candidate get authored async before any meeting. The live Sprint Planning is a 2–4 hour disambiguation session, not an 8-hour generation session.
4. Pair every retro with retreat-built relationship capital. A distributed retro without offline trust capital is structurally unable to surface candor. Schedule retreats once or twice annually before the year starts (retreats practice). Without this, the retrospective will continue to “lack the human aspect” no matter how the live session is facilitated.
5. Replace velocity with appetite as the primary planning unit. Velocity stays as a metric the team sees but does not commit to. Appetite (“this problem is worth two weeks”) becomes the unit at the betting table.
6. If you are scaling Scrum, choose Scrum@Scale over LeSS for distributed-native companies. Scrum@Scale is silent on team location, so it ports cleanly. LeSS makes co-location a rule that contradicts your operating model. The site’s evidence is direct primary refutation of that rule at distributed scale.
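The audit in step 1 can be made concrete with a small scoring sketch. This is an illustrative assumption, not a published instrument: each event gets scored over one cycle against the purpose the Scrum Guide assigns it, on an invented 0–2 scale, and low scorers are flagged as theater and candidates for an async replacement.

```python
# Purposes paraphrased from the Scrum Guide; event keys and the 0-2 scale
# are choices made for this sketch, not part of any canonical artifact.
EVENT_PURPOSE = {
    "sprint_planning": "the Sprint Goal was clearer afterward",
    "daily_scrum": "progress toward the Sprint Goal was inspected and adapted",
    "sprint_review": "stakeholders gave actionable feedback",
    "retrospective": "at least one concrete improvement was agreed",
}

def audit(scores: dict[str, int], threshold: int = 1) -> list[str]:
    """Return events scoring at or below `threshold` on a 0-2 scale
    (0 = pure theater, 1 = marginal, 2 = clearly served its purpose).
    Flagged events are candidates for an async replacement."""
    return [event for event, score in scores.items()
            if event in EVENT_PURPOSE and score <= threshold]
```

For example, a cycle scored as `{"sprint_planning": 2, "daily_scrum": 0, "sprint_review": 1, "retrospective": 2}` flags the Daily Scrum and the Sprint Review, which matches the two events this framework modifies most aggressively.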
Sources
Canonical Scrum:
- The Scrum Guide 2020 (Schwaber & Sutherland): https://scrumguides.org/scrum-guide.html
- Scrum.org — Remote Work and Scrum (Dave West, June 20, 2022): https://www.scrum.org/resources/blog/remote-work-and-scrum
- Scrum.org — Remote Agile (Part 5): The Remote Retrospective with a Distributed Team (Stefan Wolpers, April 14, 2020): https://www.scrum.org/resources/blog/remote-agile-part-5-remote-retrospective-distributed-team
- Scrum.org — Remote Agile (Part 6): Sprint Planning with Distributed Teams (Stefan Wolpers, April 19, 2020): https://www.scrum.org/resources/blog/remote-agile-part-6-sprint-planning-distributed-teams
- Scrum.org — Remote Agile (Part 7): Sprint Review with Distributed Teams (Stefan Wolpers, April 27, 2020): https://www.scrum.org/resources/blog/remote-agile-part-7-sprint-review-distributed-teams
- Scrum.org — Remote Agile (Part 8): Daily Scrum with Distributed Teams (Stefan Wolpers, May 4, 2020): https://www.scrum.org/resources/blog/remote-agile-part-8-daily-scrum-distributed-teams
- Scrum Alliance — How Agile Teams Overcome Obstacles Created by Distance (Jesse Fewell): https://resources.scrumalliance.org/Article/agile-teams-overcome-obstacles-created-distance
- Scrum@Scale Guide (Sutherland / Scrum Inc.): https://www.scrumatscale.com/scrum-at-scale-guide-online/
- LeSS Principles: https://less.works/less/principles
- LeSS — Notes on co-location and work from home (rules page): https://less.works/less/rules/colocation
- LeSS — Co-location still matters (Vodde, 2019): https://less.works/blog/2019/12/20/colocation-still-matters.html
Empirical:
- de Souza Santos, Ralph, Arshad & Stol (2023). Distributed Scrum: A Case Meta-Analysis. ACM Computing Surveys: https://dl.acm.org/doi/10.1145/3626519
- Cucolaș & Russo (2023). The impact of working from home on the success of Scrum projects. Journal of Systems and Software: https://pmc.ncbi.nlm.nih.gov/articles/PMC9684095/
- S&P Global action research (2025), Information and Software Technology: https://www.sciencedirect.com/science/article/pii/S0950584925000679
- Magalhães et al. (2025). A Delphi Study on the Adaptation of SCRUM Practices to Remote Work: https://arxiv.org/html/2503.21960
- Verwijs & Russo (2023). A Theory of Scrum Team Effectiveness. ACM TOSEM: https://dl.acm.org/doi/10.1145/3571849
- Digital.ai 18th Annual State of Agile Report (2025): https://digital.ai/press-releases/digital-ais-18th-state-of-agile-report-marks-the-start-of-the-fourth-wave-of-software-delivery/
- Anand, Kaur, Singh, Alhazmi (2021). Optimal Sprint Length Determination for Agile-Based Software Development. Computers, Materials & Continua, 68(3): https://www.techscience.com/cmc/v68n3/42509
Distributed-company operating evidence (cross-referenced from site profiles):
- 37signals — How We Work: https://basecamp.com/handbook/how-we-work
- 37signals — Shape Up: https://basecamp.com/shapeup
- GitLab — Cadence: https://handbook.gitlab.com/handbook/company/cadence/
- GitLab — Asynchronous communication: https://handbook.gitlab.com/handbook/company/culture/all-remote/asynchronous
- Linear — The Linear Method: https://linear.app/method
- Doist — How Doist Works Remote: https://doist.com/how-we-work/how-doist-works-remote
- Automattic — Expectations: https://automattic.com/expectations/
Inferences
- The Scrum Guide’s silence on distribution is structural, not accidental. Empiricism is location-agnostic; the prescribed events presume proximity. The Scrum bodies have published per-event remote adaptations without updating the canon, so the framework is in practice two documents (the Guide plus the unwritten remote-adaptation consensus) that have not been reconciled. Distributed teams adopting Scrum inherit this unreconciled state and have to resolve it locally.
- Scrum@Scale ports more cleanly to distributed scale than LeSS, because Scrum@Scale is silent on team location while LeSS makes co-location a rule. The site’s GitLab and Automattic evidence is the strongest contemporary refutation of the LeSS co-location rule at scale. This is the one place where the distributed-native operating evidence directly contradicts a named, canonical scaling-framework rule.
- The dedicated full-time Scrum Master role is structurally redundant in distributed companies under ~50 people with a maintained handbook, written communication norms, and a clear PO/DRI. The de Souza Santos et al. (2023) meta-analysis finding of “no impact” on project success is consistent with this: the framework’s role overhead appears roughly equal to its role benefit. The Scrum bodies will resist this conclusion; the operating evidence supports it.
- A research-grade prediction worth testing: a fully-distributed-native company running orthodox Scrum will score lower on Verwijs & Russo’s “team autonomy” effectiveness factor than the same company running async-adapted Scrum. The Verwijs & Russo dataset would let someone test this if it were re-coded for distributed vs. co-located teams.
Work with Alex
If you are running Scrum in a distributed company and the events feel like overhead while the empirical control loop is what you actually need, Alex helps leadership teams adapt Scrum to async-default operating models — keeping the empiricism, replacing the sync-presuming events, and refusing the parts of the canon that are out of date.