Saturday, September 18, 2021

linux kernel uprobes

Let's consider another spying mechanism in the Linux kernel - uprobes. Like kprobes they insert an int3, but this time into user-mode code, and can be used, for example, to steal TLS traffic. I made some simple code to set up a uprobe for /usr/bin/ls on the PLT thunk of getenv:

objdump -d /usr/bin/ls
0000000000004710 <getenv@plt>:
    4710: f3 0f 1e fa          endbr64 
    4714: f2 ff 25 5d e5 01 00 bnd jmpq *0x1e55d(%rip)        # 22c78 <getenv@GLIBC_2.2.5>
    471b: 0f 1f 44 00 00        nopl   0x0(%rax,%rax,1)

Now run ls:
ls -i /usr/bin/ls
1043126 /usr/bin/ls 
dmesg | tail
[258600.533089] uprobe ret_handler is executed, ip = 55EAECA62B54
[258600.533090] uprobe handler in PID 43831 executed, ip = 55eaeca56710
[258600.533093] uprobe ret_handler is executed, ip = 55EAECA62B6C
[258600.533095] uprobe handler in PID 43831 executed, ip = 55eaeca56710
[258600.533098] uprobe ret_handler is executed, ip = 55EAECA5861C
[258600.533099] uprobe handler in PID 43831 executed, ip = 55eaeca56710
[258600.533102] uprobe ret_handler is executed, ip = 55EAECA57F60
[258600.533111] uprobe handler in PID 43831 executed, ip = 55eaeca56710
[258600.533114] uprobe ret_handler is executed, ip = 55EAECA57A77

And you can't see which uprobes are installed - the file /sys/kernel/debug/tracing/uprobe_events is empty. The NSA can hide their anal catheters even in open source, yeah. So I wrote code to dump all uprobes (stored in uprobes_tree) and the consumers of each uprobe

Thursday, September 9, 2021

linux kernel kprobes

Without a doubt the craziest and most insane spying mechanism in the Linux kernel is kprobes

  1. It's expensive - each time the int3 fires, execution goes through a long chain in the trap handler before and after your callback runs
  2. It makes working with kgdb (which is itself far behind windbg) a nightmare - the function do_int3 first calls kgdb_ll_trap
  3. There is no mechanism to predict which functions cannot be kprobed. Assume your handler uses a simple printk - then you can't set a kprobe on the whole graph of functions called from printk (like vprintk_func, vprintk_default, vprintk_emit, __msecs_to_jiffies, arch_touch_nmi_watchdog, touch_softlockup_watchdog, __printk_safe_enter, _raw_spin_lock, vprintk_store, vscnprintf, cont_flush etc etc), and as far as I know there is no way to even find them all
  4. Sure, you have the /sys/kernel/debug/kprobes/list file, so you can see which functions were hooked. But there is no way to know by whom
So I wrote a dumper of installed kprobes. Sample output:

sudo ./lkmem -k -c ~/krnl/curr ~/krnl/System.map-5.11.0-34-generic
kprobes[47]: 1
 kprobe at 0xffffffffc0605080 flags 8
  addr: 0xffffffffa4a9f040 - kernel!__do_sys_fork
  pre_handler: 0xffffffffc0603548 - lkcd
  post_handler: 0xffffffffc0603526 - lkcd

Monday, September 6, 2021

linux-kernel per-cpu vars

It's hard to believe but Linux has a degraded version of Windows' KPCR - so-called "per-cpu variables". This is isolated memory assigned to each CPU (addressed via the gs segment register on x64 and the TPIDR_EL1 system register on arm64) and it can contain some interesting fields. Why is it important to know the offsets of some of these variables? Well, I suspect that the Linux kernel contains much more code for espionage than Windows (for example trace events, tracepoints, kprobes, usb_mon_register etc etc). One such piece of code is the function user_return_notifier_register, with which you can register your own notifications. Unfortunately this list of notifications is stored in the per-cpu variable return_notifier_list

And as usual there is no include file with definitions of all of these per-cpu fields. Moreover, the offsets depend on the kernel build config and differ in each build. Sounds like a nightmare - a reason to turn off the computer and go drink vodka while looking at the autumn rain.

Or not? Let's look at the disassembly of some function using this variable - like fire_user_return_notifiers:
fire_user_return_notifiers proc near
 call    __fentry__ ; another entry for spy code
 mov     rax, offset unk_29450
 add     rax, gs:this_cpu_off ; .data..percpu:0000000000011368
 mov     rdi, [rax]

In this build return_notifier_list happens to have offset 0x29450 and this_cpu_off 0x11368.
Well, we can use a disassembler to extract the offsets of both return_notifier_list & this_cpu_off and then write code like:
; rdi - this_cpu_off
; rsi - offset
mov rax, [gs:rdi]
add rax, rsi

Patch on github to extract this_cpu_off & return_notifier_list with some disasm magic

Saturday, August 28, 2021

linux kernel tracing

It's hard to believe but the Linux kernel has an almost exact copy of Windows ETW - event tracing. It is just as difficult to make it work, just as poorly documented, very complex and fragile. And yes, as you can guess, it also can't show who is using which parts of it. So I wrote some code to dump the functions registered in tracepoints and to check the file ops for files under /sys/kernel/tracing/events

Let's start with tracepoints. As you can see, this structure has a strange-looking list of functions in the field funcs, and the calling happens in functions like event_triggers_call. How can we find these tracepoints? Well, they are stored in trace_event_call->tp, and the array of pointers to trace_event_call is located between the symbols __start_ftrace_events and __stop_ftrace_events. Unfortunately all these treasures live in the discardable section .init.data. But because they were all declared in the same manner we can find them by name - all symbols with the prefix __tracepoint_ are what we need. Some examples (you can run lkmem -c -t vmlinux system.map to get this):

 __tracepoint_sys_enter at 0xffffffff8b82e340: enabled 0 cnt 0
  regfunc 0xffffffff8a192330 - kernel!syscall_regfunc
  unregfunc 0xffffffff8a1923f0 - kernel!syscall_unregfunc

Well, no clients right now - cnt is 0

Next, about the /sys/kernel/tracing/events files (this is a perverted, inhuman interface for managing trace events). I just dump file->f_path.dentry->d_inode->i_fop for each such file. Sample output (you can get this with lkmem -s vmlinux system.map path_to_some_sys_kernel_tracing_file):

Friday, August 27, 2021

arm64 disasm for linux kernel

Today I added a disassembler for the arm64 Linux kernel to search for pointers. It turned out to be surprisingly difficult for several reasons (the disasm for x64 is only 383 LOC vs 618 for arm64)

One of them is the poor code produced by some gcc versions.

But the main problem is arm64 opcodes. Let's look at a simple indirect call:
  ADRP            X27, #mh_filter@PAGE
  CMP             W22, #0x3A ; ':'
  B.EQ            loc_FFFFFFC010CC7140
  CMP             W22, #0x87
  B.NE            loc_FFFFFFC010CC7188
  LDR             X2, [X27,#mh_filter@PAGEOFF]
  CBZ             X2, loc_FFFFFFC010CC7188
  MOV             X1, skb
  MOV             X0, X28
  BLR             X2

Compare this with the code that calls the list of funcs from a tracepoint:
  ADRP            __data, #__tracepoint_cpu_idle@PAGE
  ADD             X0, X0, #__tracepoint_cpu_idle@PAGEOFF
  MOV             X29, SP
  STR             X19, [SP,#var_s10]
  LDR             X19, [X0,#(__tracepoint_powernv_throttle.funcs - 0xFFFFFFC011A562C0)]
  LDR             X4, [X19]
  MOV             W3, W20
  LDR             X0, [X19,#8]
  MOV             X2, X21
  MOV             W1, W22
  BLR             X4
  LDR             X0, [X19,#0x18]!
  CBNZ            X0, loc_FFFFFFC01011FC60

In the second case register X4 was loaded from X19, which in turn was loaded from some memory, so I need to track how many times the content of a register was loaded

Anyway, the result is +34 newly discovered function pointers

Monday, August 23, 2021

functions pointers in linux kernel data sections

I wrote a simple program to estimate the size of the problem. Yes, I know about CFI, but it seems that even on kernel 5.11 on fresh Ubuntu this mechanism is not implemented, and indirect calls look like:

  mov     rax, cs:XXX
  call    __x86_indirect_thunk_rax

__x86_indirect_thunk_rax proc near: 
  jmp     rax

The first approach is just to scan the .data section - you can do this by running

./lkmem path-to-unpacked-kernel path-to-System.map

Some results:
  • arm64 5.11.0: 9893
  • x64 5.8-53: 10698
  • x64 5.11.0: 13414
  • x64 4.18: 16224
Ok, how about not-yet-initialized pointers (or pointers in the .bss section)? We need to use a disassembler - just disassemble all functions in .text and find indirect calls and calls to __x86_indirect_thunk_XXX. Results (with the -d option):
  • x64 4.18: +42
  • x64 5.8-53: +52
  • x64 5.11.0: +45
and with .bss section (option -b):
  • x64 4.18: +99
  • x64 5.8-53: +120
  • x64 5.11.0: +109

Sunday, August 15, 2021

dumper of linux kernel notification chains

There seems to be one little-known thing in the Linux kernel - notification chains. They have a literal analogue of PsSetLoadImageNotifyRoutine - the function register_module_notifier. And similarly they don't have a function to enumerate registered notifications - I don't know why. Maybe they were bitten by Microsoft. Or maybe I want too much from people whose very own "The Linux Kernel Module Programming Guide" contains an error in its code example. Anyway, I decided to write my own (btw the last time I wrote drivers for Linux was something around 20 years ago)

How to run

git clone https://github.com/redplait/lkcd.git
cd lkcd
sudo insmod ./lkcd.ko
cd test
sudo ./dtest

Sample output (from a fresh Ubuntu):