Overview
Git "RPC failed" errors usually occur during networked operations (push, clone, fetch) when the HTTP/HTTPS transport closes unexpectedly, times out, or chokes on large packfiles. Common messages:
- RPC failed; curl 18 transfer closed with outstanding data
- RPC failed; curl 28 operation timed out
- RPC failed; curl 55/56 failure sending/receiving data
- HTTP/2 stream was not closed cleanly
- The remote end hung up unexpectedly
This guide shows practical diagnostics and fixes you can apply quickly.
Quickstart (most common fixes)
- Turn on verbose tracing to see where it fails.
export GIT_TRACE=1
export GIT_CURL_VERBOSE=1
# Optional: deeper packet tracing
export GIT_TRACE_PACKET=1
- If you see HTTP/2 or stream reset errors, force HTTP/1.1 and retry.
git config --global http.version HTTP/1.1
git push # or: git fetch / git clone
- If the push/clone is large and slow, extend low-speed timeout and reduce concurrency.
git config --global http.lowSpeedLimit 1 # bytes/sec threshold
git config --global http.lowSpeedTime 600 # seconds before aborting
git config --global http.maxRequests 2 # fewer parallel HTTP requests
- Pushing big files? Move them to Git LFS and push again.
# Install LFS (once)
git lfs install
# Track large binary patterns
git lfs track "*.zip" "*.bin"
# Commit updated .gitattributes
git add .gitattributes && git commit -m "Use LFS for large binaries"
# Re-add large files so they go to LFS
git add path/to/large.bin && git commit -m "Move large file to LFS"
# Push
git push
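Note that git lfs track only affects new commits. If large files are already committed, a hedged sketch of rewriting existing history into LFS with git lfs migrate (the patterns are examples; commit IDs change, so coordinate before force-pushing):

```shell
# Rewrite ALL refs so matching files become LFS pointers.
# CAUTION: commit IDs change; coordinate with your team first.
git lfs migrate import --include="*.zip,*.bin" --everything
# Then, per your team's policy:
# git push --force-with-lease --all
```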
- If your repo is huge, fetch/clone with less data.
# Shallow clone the latest commit
git clone --depth 1 https://example.com/repo.git
# Or partial clone without blobs
git clone --filter=blob:none https://example.com/repo.git
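A shallow clone is not a dead end: history can be deepened later. A self-contained sketch using a throwaway local repository (the paths and demo identity are illustrative):

```shell
# Demo: shallow-clone a local repo over file://, then fetch full history.
tmp=$(mktemp -d) && cd "$tmp"
git init -q origin-repo && cd origin-repo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "first"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "second"
cd ..
git clone -q --depth 1 "file://$tmp/origin-repo" work && cd work
git rev-list --count HEAD   # only 1 commit visible
git fetch -q --unshallow    # or deepen gradually: git fetch --deepen=100
git rev-list --count HEAD   # full history (2 commits)
```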
Minimal working example: apply safe transport fallbacks and retry
# 1) Enable trace to observe the failure
export GIT_TRACE=1 GIT_CURL_VERBOSE=1
# 2) Apply transport workarounds
# - Use HTTP/1.1 to avoid flaky HTTP/2 intermediaries
# - Be tolerant of slow networks and reduce concurrency
git config --global http.version HTTP/1.1
git config --global http.lowSpeedLimit 1
git config --global http.lowSpeedTime 600
git config --global http.maxRequests 2
# 3) Improve local pack performance to avoid timeouts during push
# Lower compression to speed pack writing (trade bandwidth for time)
git config --local core.compression 1
# Limit pack threads on constrained machines (or omit to auto-detect)
git config --local pack.threads 2
# 4) Retry your operation
# Example: push current branch
git push -v
If this succeeds, make the settings permanent as needed (global/local).
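Transient resets often succeed on a second or third attempt. A minimal sketch of a retry helper with linear backoff (the retry function is illustrative, not a Git feature):

```shell
# retry MAX CMD...: run CMD up to MAX times, backing off between tries.
retry() {
  max=$1; shift
  attempt=1
  until "$@"; do
    status=$?
    [ "$attempt" -ge "$max" ] && return "$status"
    sleep $((attempt * 10))   # backoff: 10s, 20s, ...
    attempt=$((attempt + 1))
  done
}

# Usage: retry the push up to 5 times
# retry 5 git push -v
```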
Step-by-step diagnosis
- Identify transport and symptom
- HTTPS with HTTP/2 errors: “stream reset”, “not closed cleanly” → likely proxy/load balancer issue.
- Timeouts (curl 28): slow link or large pack creation.
- Unexpected disconnect during push: oversized objects, server limits, or idle timeouts.
- Capture evidence
- Use GIT_TRACE, GIT_CURL_VERBOSE, and GIT_TRACE_PACKET.
- Note where it fails: during POST (push) or GET (fetch/clone).
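To keep the evidence, redirect one failing run into a log file and grep it afterwards (run inside the affected repository; git-trace.log is just an example name):

```shell
# Capture all trace output from one failing operation
GIT_TRACE=1 GIT_CURL_VERBOSE=1 GIT_TRACE_PACKET=1 \
  git push 2> git-trace.log || true

# Which protocol ran, and where did the transfer stop?
grep -iE "HTTP/|curl|RPC failed" git-trace.log || echo "no transport lines logged"
```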
- Network checks
- If behind a corporate proxy/TLS inspection, test from a network without it.
- Configure proxy explicitly if required:
git config --global http.proxy http://user:pass@proxy:3128
# Note: http.proxy also covers HTTPS remotes; there is no separate https.proxy key in Git.
# For a per-host override, use the http.<url>.* form:
git config --global http.https://example.com.proxy http://user:pass@proxy:3128
- HTTP/2 workarounds
- Force HTTP/1.1:
git config --global http.version HTTP/1.1
- Timeouts and slow links
- Be more tolerant of low speeds and long packs:
git config --global http.lowSpeedLimit 1
git config --global http.lowSpeedTime 600
- Reduce data transferred
- Shallow/partial operations:
# Fetch fewer commits
git fetch --depth 50 origin main
# Partial clone for future on-demand blob fetches
git clone --filter=blob:none https://example.com/repo.git
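Partial clone pairs well with sparse-checkout, so only the directories you actually work in are fetched; a sketch where src/ and docs/ are placeholder paths:

```shell
# Clone commit metadata only, with a sparse (top-level) worktree
git clone --filter=blob:none --sparse https://example.com/repo.git
cd repo
# Materialize just the directories you need; blobs are fetched on demand
git sparse-checkout set src/ docs/
```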
- Handle large files properly
- Use Git LFS for binaries >50–100 MB; avoid committing archives/images directly.
- Tame pack size and speed up packing
- Before push:
# Clean and repack
git gc --prune=now
# Optionally, an explicit repack for large repos
git repack -Ad
# Reduce compression level to speed up pack creation
git config --local core.compression 1
- Repository health
# Look for corruption (rare but worth checking)
git fsck
- If HTTPS keeps failing, try SSH (if your server supports it)
# Change origin to SSH (example for GitHub-like remotes)
git remote set-url origin git@host:org/repo.git
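After switching, a quick sanity check that SSH access actually works (git@host is the placeholder from above; -T tests authentication without opening a shell):

```shell
# Test SSH auth to the Git host (the greeting/exit message varies by host)
ssh -T -o ConnectTimeout=10 git@host
# Confirm origin now uses SSH
git remote -v
```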
Common scenarios and targeted fixes
HTTP/2 resets via proxies/load balancers
- Symptom: “HTTP/2 stream was not closed cleanly”
- Fix: git config --global http.version HTTP/1.1
Large push times out
- Symptom: curl 28 timeout during POST/send-pack
- Fixes: lower compression, run git gc/repack, extend lowSpeedTime, reduce http.maxRequests, ensure stable network
Big binaries in history
- Symptom: push fails after sending some objects; remote rejects size
- Fixes: migrate to Git LFS; if history already contains large files, rewrite history (e.g., git filter-repo) and force-push per policy
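For history that already contains oversized blobs, a hedged sketch using git-filter-repo (a separate install; the 100M threshold and URL are examples). Work on a fresh clone, since every affected commit ID is rewritten:

```shell
# Fresh clone dedicated to the rewrite
git clone https://example.com/repo.git repo-rewrite
cd repo-rewrite
# Drop every blob larger than 100 MB from all history
git filter-repo --strip-blobs-bigger-than 100M
# Push rewritten history per your team's policy:
# git push --force --all
```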
Very large clone/fetch
- Symptom: “remote end hung up unexpectedly” mid-download
- Fixes: shallow or partial clone; fetch in smaller depths; ensure proxy allows long-lived connections
Corporate proxy issues
- Symptom: failures only on corporate network
- Fixes: set http/https.proxy; force HTTP/1.1; coordinate with network team; avoid disabling SSL verification
Pitfalls to avoid
- Treating http.postBuffer as a cure-all. Raising it only matters for servers or proxies that cannot handle chunked transfer encoding; on modern Git/cURL stacks it rarely helps.
- Disabling SSL verification (http.sslVerify=false). This is insecure; use only for short-lived diagnostics in trusted environments.
- Overusing git gc --aggressive on CI. It is CPU-heavy and often unnecessary.
- Pushing huge binaries without LFS. It bloats history and triggers timeouts/quotas.
- Blind force-push after history rewrites. Coordinate with your team and CI before replacing remote history.
Performance notes
- Prefer partial clone (--filter=blob:none) for large monorepos; it dramatically reduces initial transfer.
- Lower core.compression (e.g., 1) to speed pack creation on push; bandwidth use increases slightly.
- Keep pack.threads moderate (2–4) on limited CPUs; on strong machines, omit to auto-detect.
- Reduce http.maxRequests when intermediaries struggle with parallel streams.
- Use shallow fetches in CI to minimize network load (e.g., --depth=50).
Tiny FAQ
Why does forcing HTTP/1.1 help? Many proxies and load balancers mishandle long HTTP/2 streams used by Git; HTTP/1.1 is more compatible.
Should I increase some "buffer" setting? Usually not. The commonly cited http.postBuffer tweak only helps when the server cannot accept chunked transfer encoding; it does not address most modern causes of RPC failures.
How do I know if LFS is needed? If you have binaries larger than ~50–100 MB or many medium-sized binaries, use LFS to avoid huge packfiles.
Can I recover from a failed push? Yes. Fix the transport or repository issue, then rerun git push -v. The transfer restarts from scratch (pushes are not resumable), but on most hosts a failed push leaves no partial server state to clean up.
Is SSH more reliable than HTTPS? It can be in environments with strict proxies. If your host supports SSH, it’s a valid fallback.