Overview
Git’s “error: RPC failed” usually indicates the connection broke while sending or receiving packfiles. Typical causes:
- Large files or gigantic history creating huge packfiles
- Slow or flaky networks, proxies, or middleboxes (HTTP/2 quirks)
- Server-side limits (file size, bandwidth, timeouts)
- Misconfigured clients (compression, keepalive)
This guide focuses on quick diagnostics and fixes that work in local dev and CI.
Quickstart (triage and common fixes)
- Capture details: run with verbose tracing
  - Linux/macOS: GIT_TRACE=1 GIT_CURL_VERBOSE=1 git push
  - Windows PowerShell: $env:GIT_TRACE=1; $env:GIT_CURL_VERBOSE=1; git push
- If errors mention large transfers or timeouts:
- Use Git LFS for large files
- Split work: push smaller commits/branches
- Use HTTP/1.1: git config --global http.version HTTP/1.1
- Shallow/partial operations for fetch/clone
- If behind a proxy:
- Try without proxy: git -c http.proxy= -c https.proxy= clone <url>
- Or switch transport to SSH
- Retry with reduced concurrency and longer timeouts
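The retry advice above can be sketched as a small shell helper; the function name and backoff values are illustrative, not anything built into Git:

```shell
# Illustrative helper: run a command up to $1 times with exponential backoff.
retry() {
  max=$1; shift
  attempt=1
  delay=2
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "retry: giving up after $attempt attempts" >&2
      return 1
    fi
    echo "retry: attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    attempt=$((attempt + 1))
    delay=$((delay * 2))
  done
}

# Usage (hypothetical remote):
# retry 5 git push origin main
```

Exponential backoff avoids hammering a struggling server or proxy; combine it with the timeout settings below for flaky links.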
Minimal working example (large file push → fix with LFS)
```shell
# 1) Create a small repo with a large file
mkdir demo-rpc-failed && cd demo-rpc-failed
git init
python - << 'PY'
with open('big.bin', 'wb') as f:
    f.write(b'\0' * 120 * 1024 * 1024)  # ~120 MB of zeros
PY
git add big.bin
git commit -m "Add large binary"

# 2) Try pushing to a remote that enforces size limits (example)
# git remote add origin https://example.com/your/repo.git
# git push -u origin main
# Many servers will fail with an RPC error or reject large files

# 3) Fix: use Git LFS
git lfs install
git lfs track "*.bin"      # writes the filter rules into .gitattributes
# Re-add the file so it is stored as an LFS pointer
git rm --cached big.bin
git add .gitattributes big.bin
git commit -m "Track binaries with LFS"

# 4) Push again
# git push -u origin main
```
Notes:
- LFS uploads binary content via a separate endpoint, avoiding huge Git packfiles.
- If the repo already has big binaries in history, use git lfs migrate import --include="*.bin" to rewrite history (coordinate with your team before force-pushing).
Step-by-step troubleshooting
- Identify the failure mode
- Inspect messages: examples include “curl 18 transfer closed”, “sideband” disconnects, or “HTTP/2 stream was not closed cleanly”.
- Enable debug envs:
- GIT_TRACE=1 GIT_CURL_VERBOSE=1 GIT_TRACE_PACKET=1 git push
- Determine the transport: git remote -v (http/https vs. ssh).
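Keeping the trace for later inspection is as simple as redirecting stderr to a file; a minimal sketch, using "git version" as a harmless stand-in for the failing command:

```shell
# Sketch: capture Git's debug trace to a file so it can be searched later.
# Substitute your failing push/fetch for "git version" and add
# GIT_CURL_VERBOSE=1 for HTTP-level details.
GIT_TRACE=1 git version 2> trace.log
grep -i "trace" trace.log   # trace lines name the command Git actually ran
```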
- Large files or large packfiles
- Prefer Git LFS for binaries and media:
- git lfs install
- git lfs track "*.zip" "*.bin" "*.mp4"; then commit the updated .gitattributes (gitattributes patterns do not expand braces, so list each extension separately)
- Split pushes:
- Push smaller batches of commits or narrow branches first.
- Reduce history transferred when fetching:
  - Shallow: git fetch --depth=1 or git clone --depth=1 <url>
  - Partial clone (the server must support it): git clone --filter=blob:none <url>
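A partial clone can be tried end-to-end against a throwaway local repository; the repository names below are invented for the demo, and the server-side uploadpack.allowFilter switch is flipped by hand (major hosting services typically allow it already):

```shell
# Sketch: partial clone against a local "server" repo.
git init -q filter-server
git -C filter-server config user.email demo@example.com
git -C filter-server config user.name "Demo"
git -C filter-server config uploadpack.allowFilter true
echo "hello" > filter-server/file.txt
git -C filter-server add file.txt
git -C filter-server commit -qm "add file"

# file:// forces the network transport, so the blob filter is honored
git clone -q --filter=blob:none "file://$PWD/filter-server" filter-clone
git -C filter-clone log --oneline   # full history; blobs are fetched on demand
```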
- Network, proxy, and HTTP/2 quirks
- Force HTTP/1.1 (mitigates some proxy/HTTP2 issues):
- git config --global http.version HTTP/1.1
- If via proxy, test bypass vs configured proxy:
  - Temporary bypass: git -c http.proxy= -c https.proxy= clone <url>
  - Persist removal: git config --global --unset http.proxy (and https.proxy)
- Increase tolerance for slow links (Git aborts only if the rate stays below the limit for the whole window, and both settings are needed):
  - git config --global http.lowSpeedLimit 1000
  - git config --global http.lowSpeedTime 600
- Reduce parallelism of HTTP requests (can help fragile proxies):
- git config --global http.maxRequests 2
- Consider switching to SSH transport:
- git remote set-url origin git@host:org/repo.git
  - With keepalive: GIT_SSH_COMMAND="ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=6" git push
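Collected as a config fragment, the HTTP-side mitigations above might look like this in ~/.gitconfig (the values are illustrative starting points, not Git defaults):

```ini
[http]
    version = HTTP/1.1      ; sidestep proxy HTTP/2 quirks
    lowSpeedLimit = 1000    ; bytes per second...
    lowSpeedTime = 600      ; ...sustained below that for this long before aborting
    maxRequests = 2         ; fewer parallel requests for fragile proxies
```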
- Repository health and packing
- Clean and repack to avoid pathological packs before pushing:
- git gc --prune=now
- git repack -ad
- Tune compression/threads on small CI agents:
- git config --global pack.threads 1
- Avoid the old http.postBuffer advice; raising it rarely fixes RPC failures (it only changes when Git switches to chunked transfer encoding).
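The gc/repack step can be rehearsed in a scratch repository; the repo and file names below are invented for the demo:

```shell
# Sketch: repack a scratch repo and inspect the resulting pack size.
git init -q repack-demo
git -C repack-demo config user.email demo@example.com
git -C repack-demo config user.name "Demo"
seq 1 5000 > repack-demo/data.txt
git -C repack-demo add data.txt
git -C repack-demo commit -qm "add data"
git -C repack-demo gc --prune=now
git -C repack-demo repack -ad
git -C repack-demo count-objects -v   # "size-pack" is the packed size in KiB
```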
- CI-friendly patterns
- Use shallow and filtered clones to cut time and bytes:
- git clone --depth=1 --no-tags --filter=blob:none <url>
- Retry logic with backoff around fetch/push in flaky networks.
- Cache repositories between jobs (e.g., via a local mirror or Git alternates) to reduce transfer.
- For submodules: sync and shallow-update them explicitly:
- git submodule sync --recursive
- git submodule update --init --recursive --depth=1
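One way to cache between jobs is a local mirror reused via --reference, which links the new clone to the cache through Git's alternates mechanism; all paths below are invented for the sketch:

```shell
# Sketch: keep a mirror as a cache, then clone cheaply against it.
git init -q upstream
git -C upstream config user.email demo@example.com
git -C upstream config user.name "Demo"
echo "v1" > upstream/app.txt
git -C upstream add app.txt
git -C upstream commit -qm "initial"

# One-time (or periodically refreshed) cache, restored between CI jobs
git clone -q --mirror "file://$PWD/upstream" repo-cache.git

# Job-time clone borrows objects from the cache via alternates
git clone -q --reference "$PWD/repo-cache.git" "file://$PWD/upstream" job-clone
git -C job-clone log --oneline
```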
Common scenarios and fixes
- Push rejected because of files around 100 MB
  - Use Git LFS; many hosts reject large blobs in regular Git.
- Clone stalls mid-transfer behind a corporate proxy
  - Force HTTP/1.1; lower http.maxRequests; or switch to SSH.
- Pull/fetch of a monorepo times out
  - Use partial clone (--filter=blob:none) and sparse-checkout for the paths you need.
- Random disconnects on slow links
  - Increase http.lowSpeedTime; retry; reduce pack threads.
Pitfalls
- Do not disable SSL verification permanently; use only for short tests.
- http.postBuffer is obsolete advice; do not rely on it.
- History rewrites (e.g., LFS migrate, filter-repo) require a coordinated force-push and downstream resync.
- LFS requires client install in all environments (dev, CI) before clone/pull.
- Mixing HTTP and SSH remotes can confuse credentials; standardize per repo.
Performance notes
- Prefer partial clone over very shallow clones for long-lived CI caches; it reduces blobs transferred while keeping history structure.
- Use sparse-checkout in monorepos to reduce working tree size.
- On constrained agents, reduce pack threads to avoid CPU contention.
- Regular git gc keeps packs reasonable and speeds up pushes.
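Sparse-checkout can be exercised against a local repository; the monorepo layout and directory names here are hypothetical:

```shell
# Sketch: cone-mode sparse-checkout of one directory from a two-directory repo.
git init -q mono
git -C mono config user.email demo@example.com
git -C mono config user.name "Demo"
mkdir -p mono/frontend mono/backend
echo "a" > mono/frontend/a.txt
echo "b" > mono/backend/b.txt
git -C mono add .
git -C mono commit -qm "layout"

git clone -q --sparse "file://$PWD/mono" mono-clone   # top-level files only
git -C mono-clone sparse-checkout set frontend        # materialize one dir
ls mono-clone                                         # backend/ is absent
```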
FAQ
- Does increasing http.postBuffer fix this?
  - Usually not. It only changes when Git switches to chunked uploads and rarely addresses the real cause.
- Should I disable SSL verification?
- Only for brief diagnostics. Fix the root cause (certs, proxy) instead.
- Why does switching to HTTP/1.1 help?
- Some proxies/middleboxes mishandle HTTP/2 streams during large pack transfers.
- Is Git LFS mandatory for large binaries?
- It’s the recommended, reliable approach for big binary assets on most hosts.
- My shallow clone still fails—now what?
- Try partial clone filters, enforce HTTP/1.1, and check proxies/timeouts.