Commit 6a00a4b

Merge pull request #2136 from HackTricks-wiki/research_update_src_binary-exploitation_libc-heap_use-after-free_first-fit_20260416_140120

Research Update Enhanced src/binary-exploitation/libc-heap/u...

2 parents: bb25860 + bfa7e4e

1 file changed: 65 additions & 60 deletions

File tree

  • src/binary-exploitation/libc-heap/use-after-free

src/binary-exploitation/libc-heap/use-after-free/first-fit.md

Lines changed: 65 additions & 60 deletions
Original file line numberDiff line numberDiff line change
@@ -46,91 +46,96 @@ d = malloc(20); // a
4646
```
4747

4848
---
### Modern glibc considerations (tcache >= 2.26)

On current glibc, "first fit" is still useful, but it is **not** the whole allocator story anymore:

1. **Tcache is checked first.** If the requested size has entries in its tcache bin, the allocator never reaches the unsorted bin.
2. **Exact fits found in the unsorted bin may be diverted into tcache first** while glibc is filling the per-thread cache.
3. For **small requests**, glibc has a special `last_remainder` path that can be used before the generic unsorted-bin walk.
4. Very large requests can be served with **`mmap`** instead of the arena heap, so there may be no reusable unsorted chunk at all.

In practice, a first-fit primitive is easiest to reproduce when:

- The request size is **larger than `tcache_max`** (1032 bytes by default on 64-bit), or
- The corresponding tcache bin is already **full** (`tcache_count` defaults to 7), or
- Tcache was disabled by the environment.

```c
// Fill the tcache bin for one size class so later frees overflow past it
char *pool[7];
for (int i = 0; i < 7; i++) pool[i] = malloc(0x100);
for (int i = 0; i < 7; i++) free(pool[i]); // fills tcache for chunk size 0x110
```

If you control the environment, the following tunables are useful while studying or debugging allocator behaviour:

```bash
GLIBC_TUNABLES=glibc.malloc.tcache_count=0 ./binary
GLIBC_TUNABLES=glibc.malloc.mmap_threshold=0x200000 ./binary
```
The first disables tcache entirely. The second is helpful when a PoC unexpectedly gets mmapped chunks instead of arena chunks.

---

### Reliable first-fit UAF

The most direct primitive is still the old one: free a chunk and immediately request a slightly smaller or equal size, so the allocator gives you **the same region back**.

```c
char *a = malloc(0x512);
char *b = malloc(0x256);  // barrier: keeps "a" away from the top chunk
strcpy(a, "this is A!");
free(a);                  // 0x520-byte chunk -> unsorted bin (too big for tcache)

char *c = malloc(0x500);  // first fit: returns the old "a" region
strcpy(c, "this is C!");
```

If the program still keeps a pointer to `a`, you now have a classic UAF:

- Reads through `a` disclose the new contents of `c`
- Writes through `a` corrupt `c`
- If the reused chunk is interpreted as a different structure, stale pointers/flags can become attacker-controlled

This is the core idea behind many note-manager and account-manager heap challenges: the allocator itself is behaving correctly, but the application still trusts a pointer to memory that has already been recycled.

---

### Using first-fit splits for leaks or overlap

The important modern nuance is that **splitting an unsorted-bin chunk does not magically create an overlap by itself**. To get something stronger than a simple reallocation UAF, you usually need an extra bug, such as:

- A heap overflow that corrupts the size of a neighboring free chunk
- An off-by-one that changes which chunk will be split
- Another overlap that lets you keep a pointer into the remainder after the split

This is what recent CTFs tend to do. A common pattern is:

1. Force a large chunk into the unsorted bin.
2. Corrupt its size so the allocator believes a larger free region exists.
3. Request a smaller chunk from that forged region.
4. Let glibc split it and leave the **remainder** in the unsorted bin.
5. Reuse a still-reachable pointer that now overlaps the remainder and read its `fd`/`bk` pointers for a libc leak, or write through it to prepare a later tcache/fastbin attack.

In other words, modern first-fit is usually the **reuse/split stage** inside a longer chain, not the whole exploit by itself. The 2024 AngstromCTF `heapify` write-up is a good example: unsorted-bin splitting is used after a metadata corruption to keep a libc-bearing remainder overlapping attacker-controlled data. The HITCON 2024 `setjmp` write-up is another good reminder that you often need extra heap grooming just to get the allocator into the right state before the first-fit primitive becomes reachable.

> [!TIP]
> If a "first-fit" PoC stops working on a modern target, check these before debugging anything else:
>
> - Is the free going to **tcache** instead of the unsorted bin?
> - Is the request crossing into **`mmap`** territory?
> - Are the free and malloc happening in the **same thread/arena**?
> - Is glibc serving a **`last_remainder`** small chunk instead of the unsorted tail you expected?

---

### Mitigations & Hardening

- **Safe-linking (glibc >= 2.32)** protects tcache/fastbin forward pointers, but it does **not** change how unsorted-bin chunks are reused or split.
- **Malloc hooks were removed in glibc 2.34**, so modern first-fit chains usually pivot into arbitrary read/write, FILE-structure corruption, tcache poisoning, or application-specific function pointers instead of `__malloc_hook`/`__free_hook`.
- Integrity checks in the doubly-linked bins still matter: if your exploit relies on a split remainder surviving long enough to be reused, any corrupted `fd`/`bk` pointers will crash the program before you get value from the primitive.

## References

- [https://blog.quarkslab.com/heap-exploitation-glibc-internals-and-nifty-tricks.html](https://blog.quarkslab.com/heap-exploitation-glibc-internals-and-nifty-tricks.html)
- [https://hackmd.io/@aneii11/H1S2snV40](https://hackmd.io/@aneii11/H1S2snV40)
