src/binary-exploitation/libc-heap/use-after-free/first-fit.md
On current glibc, "first fit" is still useful, but it is **not** the whole alloc…

2. **Exact fits found in the unsorted bin may be diverted into tcache first** while glibc is filling the per-thread cache.
3. For **small requests**, glibc has a special `last_remainder` path that can be used before the generic unsorted-bin walk.
4. Very large requests can be served with **`mmap`** instead of the arena heap, so there may be no reusable unsorted chunk at all.
5. On **glibc 2.42+**, tcache can optionally cache much larger chunks if `glibc.malloc.tcache_max` is raised, so "`> 0x410` means unsorted" is no longer a safe assumption on tuned targets.
In practice, a first-fit primitive is easiest to reproduce when:
- The request size is **larger than `tcache_max`** (a 1032-byte request by default on 64-bit, unless the target raised it), or
- The corresponding tcache bin is already **full** (`tcache_count` defaults to 7), or
- Tcache was disabled by the environment.
If you control the environment, the following tunables are useful while studying.
The first disables tcache entirely. The second is useful on glibc `2.42+` when a lab or challenge runner raised `tcache_max` and large chunks that used to hit unsorted are still being cached. The third is helpful when a PoC unexpectedly gets mmapped chunks instead of arena chunks.
---
### Reliable first-fit UAF
In other words, modern first-fit is usually the **reuse/split stage** inside a longer chain, not the whole exploit by itself. The 2024 AngstromCTF `heapify` write-up is a good example: unsorted-bin splitting is used after a metadata corruption to keep a libc-bearing remainder overlapping attacker-controlled data. The HITCON 2024 `setjmp` write-up is another good reminder that you often need extra heap grooming just to get the allocator into the right state before the first-fit primitive becomes reachable.
Two recurring modern patterns are:
- **Remainder-preserving overlap**: corrupt a free chunk size, ask for a slightly smaller allocation, and keep a still-reachable pointer into the unsorted **remainder**. `heapify` uses this to leave libc pointers (`fd`/`bk`) readable after the split and then pivots into a later [Tcache Bin Attack](../tcache-bin-attack.md).
- **Leak-preserving reallocation**: force a libc-bearing chunk into a reusable bin, then reallocate it through an application path that writes only a few bytes (or nothing) so the leaked arena pointer survives inside the recycled region. `setjmp` reaches this state after extra heap grooming and a `malloc_consolidate()`-driven libc leak.
If what you really control is the unsorted-bin metadata rather than the recycled user area, check [Unsorted Bin Attack](../unsorted-bin-attack.md) instead. First fit often gives you the reusable chunk; the actual arbitrary write may come from a later primitive.
> [!TIP]
> If a "first-fit" PoC stops working on a modern target, check these before debugging anything else:
>