
Docker Pull Stuck or Pending? VPN and DNS Troubleshooting Steps for CLI (2026)

A hung docker pull rarely prints whether you are stuck resolving registry-1.docker.io, negotiating TLS through a filtered path, or downloading multi‑gigabyte layers across an overloaded VPN tunnel. This walkthrough splits the problem into observable phases so you can decide whether to fix resolver configuration, adjust routing and split tunnel policy, or change registry endpoints—with reproducible commands instead of forum folklore.


On this page

  1. What Docker Hub pulls actually do (where they stall)
  2. Separating DNS failures from HTTPS reachability
  3. CLI diagnostics you can run in two minutes
  4. How VPN changes the daemon path on Linux vs Desktop VMs
  5. Split tunnel patterns for registry traffic
  6. MTU oddities, mirrors, and corporate proxies
  7. Why disciplined routing beats toggling VPN blindly

What Docker Hub pulls actually do (where they stall)

Modern docker pull orchestration is a pipeline, not a single TCP session. The engine asks the Hub API for an image manifest, discovers blob digests, then retrieves layer blobs—often from geographically distributed object storage fronted by CDNs. A spinner that says “Pulling fs layer” can therefore mask manifest latency, per‑layer retries, or DNS lookups for freshly introduced hostnames that appear only after metadata resolves.
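
To see those phases directly instead of inferring them from the spinner, you can reproduce the manifest step outside Docker. A minimal sketch, assuming the public library/alpine image, anonymous pull access, and jq installed purely to extract the token field:

  # Request an anonymous pull token for library/alpine from Docker Hub's auth service
  TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull" | jq -r .token)

  # Fetch the manifest; if this returns quickly but layer pulls crawl, the problem is past DNS and auth
  curl -fsSL \
    -H "Authorization: Bearer $TOKEN" \
    -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    https://registry-1.docker.io/v2/library/alpine/manifests/latest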

Authentication adds another timing cliff when credential helpers mis‑serialize registry domains—silent backoff loops resemble networking freezes unless you correlate timestamps across daemon logs. Manifest schemas (application/vnd.docker.distribution.manifest.v2+json versus multi‑architecture indexes) change how many round‑trips precede first blob attempts; debugging pulls exclusively through spinner wording wastes cycles compared with tagging concrete milestones (“manifest fetched”, “layer digest verified”).

Knowing the phase matters because DNS fixes do nothing when TLS handshakes die halfway through a tunnel, and enlarging MTU does not repair NXDOMAIN responses from an upstream resolver that refuses docker.io zones under corporate policy. Treat symptoms like a profiler: first prove names resolve and endpoints answer HTTP 401 Unauthorized from the registry (which still proves reachability), then chase throughput.

Observing concurrency helps too: the Docker daemon downloads several layers in parallel (three by default), so watching per-layer progress distinguishes genuine bandwidth contention from a single stalled socket that the aggregate spinner percentage hides, especially when intermediaries reorder or pace traffic unexpectedly.
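
If you suspect parallel layer downloads are fighting each other over a narrow tunnel, temporarily serialize them and retry. A sketch for a Linux host where the daemon reads /etc/docker/daemon.json; max-concurrent-downloads is a documented daemon option, and the value of 1 is only for the experiment:

  # /etc/docker/daemon.json (label only; JSON itself allows no comments)
  {
    "max-concurrent-downloads": 1
  }

  sudo systemctl restart docker   # then retry the pull; the default is 3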

If you recently switched transports—for example from ISP UDP to a VPN profile favoring TCP fallback—revisit how each protocol behaves under packet loss. Background on WireGuard versus OpenVPN trade-offs applies directly to why daemon pulls feel snappy on one profile and glacial on another; see VPN protocols compared: WireGuard, OpenVPN, and proprietary transports once networking—not Docker—is clearly implicated.

Separating DNS failures from HTTPS reachability

DNS trouble shows up before any manifest data arrives, either as fast failures or as long stalls while the CLI waits on libc resolver behavior because systemd‑resolved, corporate forwarding rules, or split‑DNS inside VPN profiles reorder search domains incorrectly. Path trouble presents as stalled percentages on individual layers even though earlier manifests succeeded, usually congestion, packet loss through GRE overlays, or middleboxes stripping ECN bits rather than misconfigured search paths.

IPv6 versus IPv4 divergence illustrates why naive diagnoses fail: a resolver may return AAAA answers happily even though the IPv6 path is broken, so clients that try the IPv6 candidate first hang until fallback timers expire, surfacing as intermittent stalls unrelated to Docker versioning. Capture concise resolver timelines (resolvectl status on current systemd hosts, scutil --dns on macOS, ipconfig /all on Windows) when troubleshooting persists overnight.

Concrete separation trick: resolve registry labels independently of Docker first. If registry-1.docker.io resolves quickly yet layer downloads crawl, your bottleneck moved downstream to CDN routing or bandwidth—not resolver recursion depth. Conversely, if digests never appear because DNS lookup queues wedge behind an unreachable DoT forwarder, fix resolver addresses before touching Docker daemon JSON flags.
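
A quick way to time resolution for both address families without involving Docker at all; dig is assumed here, though drill or Resolve-DnsName answer the same questions:

  # Time A and AAAA lookups separately; a broken IPv6 path behind a happy AAAA answer
  # explains many "intermittent" hangs
  dig A    registry-1.docker.io | grep -E 'Query time|ANSWER'
  dig AAAA registry-1.docker.io | grep -E 'Query time|ANSWER'

  # Compare against a known public resolver to separate local forwarder problems from upstream ones
  dig @1.1.1.1 registry-1.docker.io +short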

VPN clients frequently inject their own DNS servers while simultaneously forcing tunnel‑only forwarding; some kernels briefly wedge until DHCP renew completes after reconnect storms. Document baseline resolver IPs before connecting so diffing becomes trivial rather than anecdotal.
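
To make the before/after comparison concrete rather than anecdotal, capture resolver state on both sides of the VPN connect. A sketch for a systemd-resolved host; scutil --dns on macOS or ipconfig /all on Windows play the same role:

  resolvectl status > /tmp/dns-before.txt            # baseline, VPN disconnected
  # ...connect the VPN client...
  resolvectl status > /tmp/dns-after.txt
  diff -u /tmp/dns-before.txt /tmp/dns-after.txt     # injected DNS servers and search domains show up here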

CLI diagnostics you can run in two minutes

These checks deliberately avoid Docker internals until the underlying network path behaves; they are collected as a copy-pasteable block after the list:

  • Name resolution: Query registry-1.docker.io with your platform resolver tool (dig, drill, Resolve-DnsName). Confirm answers with sane TTLs rather than NXDOMAIN from upstreams that silently filter or intercept the query.
  • HTTPS handshake visibility: Run verbose TLS probes such as curl -Iv https://registry-1.docker.io/v2/ expecting HTTP status lines consistent with an anonymous registry (often 401 with WWW-Authenticate). Missing certificates or stalled ClientHello implicates filters before Docker logs ever clarify.
  • Alternate egress test: Repeat the probes while tethered to a mobile hotspot versus on the VPN-attached LAN to localize ISP blocking versus profile-specific routing.
  • Docker daemon logs: Inspect daemon journal entries during stall windows—Moby prints retries differently when auth helpers mis-fire versus transport resets.
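
The same checks as one sequence; the journalctl line assumes a systemd host, and the 401 expectation applies to anonymous access against registry-1.docker.io:

  # 1. Name resolution
  dig registry-1.docker.io +short

  # 2. TLS handshake and the expected anonymous 401 with a WWW-Authenticate header
  curl -Iv https://registry-1.docker.io/v2/

  # 3. Daemon-side view of the stall window
  journalctl -u docker.service --since "15 minutes ago" --no-pager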

Traceroutes deserve restrained interpretation because CDN edges multiplex aggressively and timeouts midway rarely pinpoint a faulty hop, but abrupt truncation that appears only while the VPN is engaged points at tunnel encapsulation limits sooner than speculative sysctl tweaks do.

Linux workstations pair naturally with container workflows; aligning VPN installation steps once avoids chasing phantom Docker bugs later—mirror Ubuntu VPN install & first-time setup order-of-operations if your tunnel stack shares that distribution profile.

How VPN changes the daemon path on Linux vs Desktop VMs

On native Linux, the Docker daemon generally shares the host routing table unless you operate rootless networking overlays or bespoke bridges. Enabling a full‑tunnel VPN often redirects daemon egress through the tunnel adapter automatically—which fixes censorship‑shaped paths yet may reroute blob downloads across continents unintentionally.

Firewall orchestration layers (nftables, ufw-linked iptables chains, cloud-init hardened profiles) occasionally apply marks and rules differently once a tunnel interface appears; the resulting drops manifest as hung retries because ICMP unreachable messages are suppressed, so the only visible evidence is counters incrementing on the offending rules.
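
One way to make "counters increment" literal while a pull is stalled; chain and rule names vary per host, and nft only reports packet counts on rules created with the counter keyword:

  # Watch drop counters while the pull hangs; a steadily climbing count implicates filtering
  sudo watch -n1 'nft list ruleset | grep -i -A1 drop'
  # or, on iptables-based hosts:
  sudo watch -n1 'iptables -L -v -n | grep -i drop'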

Docker Desktop on Windows and macOS wraps engines inside lightweight VMs; VPN placement doubles complexity because packets might traverse host adapters twice depending on whether split integrations expose host DNS into the VM. Symptoms include fast browser downloads on the host yet stalled pulls inside containers—the hypervisor path diverges.

WSL2 users observing asymmetric throughput should reconcile virtual switch bridging versus mirrored NAT hops whenever VPN adapters expose differing offload capabilities.

Hypothesis testing stays empirical: compare curl timings executed inside an ephemeral helper container (docker run --rm curlimages/curl …) against identical curls from a host shell while the VPN toggles on and off. Divergence proves routing asymmetry rather than a Hub outage.
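
A minimal timing comparison, assuming the curlimages/curl image is already cached (or at least pullable) and the same write-out format is used on both sides:

  FMT='dns=%{time_namelookup} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n'

  # From the host shell
  curl -o /dev/null -s -w "$FMT" https://registry-1.docker.io/v2/

  # From inside an ephemeral container sharing the daemon's network path
  docker run --rm curlimages/curl -o /dev/null -s -w "$FMT" https://registry-1.docker.io/v2/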

Always normalize timestamps across correlated captures: a daemon journal rendered in UTC set against VPN syslog excerpts in local time invites phantom causal leaps that evaporate once the clocks are reconciled.

Split tunnel patterns for registry traffic

Split tunneling—routing only selected prefixes through VPN while letting bulk CDN downloads egress locally—is frequently the pragmatic compromise for developers whose employers mandate tunnel inspection yet penalize multi‑gigabyte artifact pulls across constrained exits. The mechanics resemble routing presets documented for desktop VPN apps; conceptually align Hub domains or CDN prefixes either inside or outside the tunnel deliberately rather than inheriting silent defaults.
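
On a Linux host the crude version of this is a pair of host routes that pin registry endpoints to the physical gateway while everything else rides the tunnel. A sketch only: the 192.168.1.1 gateway, the eth0 interface, and the idea that these two hostnames cover all pull traffic are all assumptions (Hub layer blobs come from CDN hostnames that rotate), so treat it as an illustration of the mechanics, not a complete policy:

  # Pin the registry API endpoints to the local gateway instead of the tunnel
  for host in registry-1.docker.io auth.docker.io; do
    for ip in $(dig +short A "$host" | grep -E '^[0-9]+\.'); do
      sudo ip route add "$ip/32" via 192.168.1.1 dev eth0   # gateway and device are placeholders
    done
  done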

For workstation setups already juggling selective routing rules on Windows, reuse mental models from Windows VPN split tunneling setup when Docker Desktop shares kernel adapters with interactive browsers—consistent naming beats improvising per‑adapter metric tweaks nightly.

Policy reminder: Employer mandates may forbid bypassing mandatory inspection tunnels—confirm acceptable routing patterns before carving Docker traffic away from compliance tooling.

Inverse splits (tunnel only confidential SaaS) versus VPN‑first baselines each invert failure signatures; note which baseline your IT department standardized before blaming Docker.

Document resulting hostname lists alongside firewall approvals—future Hub updates that introduce unfamiliar CDN labels reproduce fewer surprises after quarterly rotations.

MTU oddities, mirrors, and corporate proxies

Encrypted tunnels frequently shrink effective MTU; oversized TCP segments black‑hole during blob pulls despite manifest JSON succeeding instantly—classic symptom progression worth noting before rewriting daemon configs.
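
To confirm an MTU black hole rather than guess, probe with don't-fragment pings and, if the path clamps below 1500, consider lowering the daemon's default bridge MTU to match. A sketch assuming Linux ping semantics and that the tunnel is the constraining hop:

  # 28 bytes of IP/ICMP headers ride on top of -s, so 1472 probes a 1500-byte path
  ping -M do -s 1472 -c 3 registry-1.docker.io   # fails if the path cannot carry 1500
  ping -M do -s 1400 -c 3 registry-1.docker.io   # step down until probes succeed

  # /etc/docker/daemon.json -- clamp the default bridge if the tunnel MTU is smaller
  {
    "mtu": 1400
  }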

Where jitter dominates latency budgets (tc netem labs replicate WAN ugliness), prefer pacing-aware transports tuned conservatively rather than chasing unrealistic concurrency knobs exposed inside Compose scaffolding aimed at LAN‑speed CI runners.

Registry mirrors and pull‑through caches remain legitimate mitigation when geographic latency dominates—Docker publishes mirror recipes under official documentation you can adapt without sourcing binaries elsewhere. Configure daemon mirror endpoints deliberately so QA hosts mirror production expectations rather than inheriting developer laptops randomly.
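
A minimal daemon configuration for a pull-through cache; registry-mirrors is the documented daemon key, while the mirror URL is a placeholder for whatever your team actually hosts:

  # /etc/docker/daemon.json
  {
    "registry-mirrors": ["https://registry-mirror.internal.example"]
  }

  sudo systemctl restart docker
  docker info | grep -A2 -i mirror   # confirm the daemon actually picked it up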

Corporate HTTP proxies require explicit daemon proxy environment wiring (often systemd drop‑ins or Docker Desktop proxy panels); forgetting HTTPS_PROXY parity yields manifests succeeding via direct CONNECT attempts blocked asymmetrically.
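
The systemd drop-in wiring looks like this; proxy.corp.example:3128 is a placeholder, and the NO_PROXY list is where internal mirrors usually need to be exempted:

  # /etc/systemd/system/docker.service.d/http-proxy.conf
  [Service]
  Environment="HTTP_PROXY=http://proxy.corp.example:3128"
  Environment="HTTPS_PROXY=http://proxy.corp.example:3128"
  Environment="NO_PROXY=localhost,127.0.0.1,registry-mirror.internal.example"

  sudo systemctl daemon-reload && sudo systemctl restart docker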

Credential rotations rolled out upstream at the same time rarely surface cleanly inside Compose wrappers that lack backoff telemetry; raise daemon log verbosity before the rotation if you want the retries to be legible.

Credential scopes spanning mirror URLs occasionally mismatch the IAM prefixes baked into hardened pipelines; standardizing canonical registry spelling upstream avoids phantom unauthorized retries surfacing mid-download.

Why disciplined routing beats toggling VPN blindly

Randomly disconnecting VPN restores pulls yet leaks unfinished workloads onto hostile café Wi‑Fi—exactly the opposite of why engineers adopted tunnels for CI artifact retrieval in airports. Ad hoc toggling also poisons reproducibility: teammates debugging flaky pipelines inherit mystical timing dependence rather than documented egress profiles.

Compared with duct‑tape fixes, clients that expose observable routing tables, stable DNS overrides, and documented split defaults shrink Docker‑specific outages dramatically—even though Docker Hub ultimately operates independently from any single VPN vendor.

Teams scaling reproducible builds should codify these probes inside onboarding docs so interns inherit diagnostics—not mystical tribal knowledge about “that toggle near lunchtime.”

DVDVPN focuses on predictable tunnel plumbing across desktop platforms so you can articulate whether registry paths belong inside or outside encrypted overlays without rebuilding firewall chains manually each sprint. New registrations receive starter quota without requiring card verification upfront—ideal for rehearsing mirror strategies inside disposable VMs—while installers remain centralized on our download page. Manage subscriptions and pooled usage from one account dashboard once pulls stabilize beyond hobby experiments.


Isolate resolver lag before rewriting daemon JSON

Use reproducible TLS probes and split routing presets instead of blind VPN toggles.
