Tuesday, March 31, 2026

dumping llvm bitcode from cicc

This requires building a hijacked .so against the matching LLVM version - I am too lazy for that
 
cool, but does not work - cicc complains about bad arguments. I've tried many combinations with no luck
 
But hey - we are under Linux and can use many hacks, for example checking which arguments genuine nvcc passes to cicc. For this I ran nvcc -dc -keep under strace:
strace -o c.strace -s 512 -f --trace=/^exec nvcc ...
Arguments:
  • -s NUM - maximum string size; arguments can be very long, so I set this parameter to 512
  • -f - trace child processes
  • and finally --trace - since I don't know which exact syscall is used to launch processes, I used regex syntax to catch all calls starting with exec

Let's check the output file c.strace and see launches of

  • gcc/cc1plus
  • cicc
  • ptxas
  • fatbinary
  • bin2c
  • cudafe++
  • etc
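Those launches can be fished out of c.strace automatically with a small sketch like this (the sample line and the tool list are illustrative; the format assumed is strace's classic execve output):

```python
import re

# Matches lines like:
#   12345 execve("/opt/cuda/bin/cicc", ["cicc", "--nv_arch", ...], ...) = 0
EXEC_RE = re.compile(r'execve\("([^"]+)",\s*\[([^\]]*)\]')

def exec_calls(lines, names=("cicc", "ptxas", "fatbinary", "cudafe++")):
    """Yield (program, argv) for every execve of an interesting tool."""
    for line in lines:
        m = EXEC_RE.search(line)
        if not m:
            continue
        prog = m.group(1)
        if not any(prog.endswith(n) for n in names):
            continue
        # pull the quoted argv strings out of the bracketed list
        argv = re.findall(r'"((?:[^"\\]|\\.)*)"', m.group(2))
        yield prog, argv

sample = '7 execve("/opt/cuda/bin/cicc", ["cicc", "--nv_arch", "compute_90"], 0x5 /* 8 vars */) = 0'
print(list(exec_calls([sample])))
```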

After some trials the right combination of arguments for cicc is
NVVMCCWIZ=553282 cicc --nv_arch compute_XX --device-c -keep 1.cpp1.ii
ls -l *.bc
-rw-rw-r-- 1 redp redp 8072 mar 31 13:25 1.lgenfe.bc
-rw-rw-r-- 1 redp redp 9988 mar 31 13:25 1.lnk.bc
-rw-rw-r-- 1 redp redp 6500 mar 31 13:25 1.opt.bc

lgenfe.bc - bitcode from the front-end

opt.bc - bitcode after all optimization passes

to disassemble them we can now just use llvm-dis-21:

  %1 = tail call i32 asm sideeffect "activemask.b32 $0;", "=r"() #3, !dbg !11
  %2 = tail call { i32, i1 } @llvm.nvvm.shfl.sync.i32(i32 %1, i32 3, i32 %val, i32 16, i32 31) #3, !dbg !17
  %3 = extractvalue { i32, i1 } %2, 0, !dbg !17

Thursday, March 26, 2026

dwarf from nvcc

I've added some support for DWARF debug info from nvidia nvcc to my dwarfdump. As everyone knows, DWARF is over-complicated, fat and just disgusting - however, nvidia managed to take this nausea to a new level

relocs

their cuda-gdb does not contain reloc_howto_type entries for CUDA relocs - it's a special kind of bare-minimum open source where they publish as little code as possible. So my implementation is highly likely incomplete and wrong

locations

stored in section .debug_loc - that's ok, although the last time gcc used it was somewhere around version 4. Also nvidia introduced a new attribute DW_AT_address_class for addresses in different segments. Cool, but for ADDR_const_space, for example, you can't tell in which constant bank the address was placed

register names

this is the main nightmare

Wednesday, March 18, 2026

read a couple of books about compilers

LLVM Compiler for RISC-V Architecture

Describes the details of risc-v vectorization support in llvm. It should be noted that vector operations in risc-v were implemented later than in Intel and SVE in arm64 - so many flaws were taken into account (like making masks for vector operations explicit) and the result is much more convenient from the programmer's point of view
On the other hand, any HW vendor can add its own ISA subset, so compiler support for such custom processors can become very fragmented and a pure nightmare
 
Also I want to note that support for risc-v vectors in LLVM carefully avoids MLIR (IMHO the second most overrated thing after LLMs) - to do this they even had to patch their holy cow tablegen
 
Drawbacks:
  • there is no introduction to LLVM IR/risc-v specific IR, so the long IR listings are very hard to follow
  • the author doesn't give links to the source code implementing some of the algorithms. Fortunately elixir has indexed the whole LLVM source tree
4/5

Dive into Deep Learning Compiler

As far as I know, this is the only book describing AI/ML compilers so far. Also TVM looks very promising - unlike monsters like XLA/iree, it is compact and comprehensible for mere mortals

Drawbacks:

  • the book is not complete - the last two chapters about NN & deployment are just placeholders
  • it's unclear why they didn't take cublas as the baseline for matrix multiplication on CUDA
  • and openblas for the CPU version

Despite this, considering that the book is freely downloadable, my rating is 4 out of 5

Friday, March 6, 2026

SASS latency table: second try

In my first attempt I used latency tables extracted from the MD file (located inside nvdisasm), and nothing good came out of it

The obvious reason is that the real latency table should be located not in the disassembler - it must be inside ptxas. But the problem is that this file is really huge - in SDK 13 it has a size of 40Mb. Sure, no symbols included

This is not surprising bcs it contains lots of things:

  • ptxas parser
  • lots of macros
  • an optimizing compiler with 159 passes that doesn't use LLVM at all
  • code generators for several different SMs

Besides, it does not have any tracepoints, and a big part of the strings are encrypted. So it took a lot of time and patience, but finally I found and extracted the right latency table

And then a lot of discoveries came my way

Thursday, February 12, 2026

libcudadebugger.so logger

I've done some research on libcudadebugger.so internals - it seems to follow exactly the same patterns:

  • the functions table returned by GetCUDADebuggerAPI is located in the .data section, so you can patch any callback address
  • and each API function has a logger

This last fact is strange - loggers from libcuda.so were consumed by the debugger, but then who consumes the logs from the debugger itself? Check the code that loads those loggers:

  lea     rdi, aNvtxInjection6          ; "NVTX_INJECTION64_PATH"
  call    _getenv
  mov     rdi, rax                      ; file
  test    rax, rax
  jz      short loc_14B160
  mov     esi, 1                        ; mode
  call    _dlopen
  mov     r13, rax
  test    rax, rax
  jz      short loc_14B190
  lea     rsi, aInitializeinje_1        ; "InitializeInjectionNvtx2"
  mov     rdi, rax                      ; handle
  call    _dlsym
  test    rax, rax
  jz      short loc_14B1A0
  lea     rdi, sub_14A270
  call    rax 
Very straightforward - load a shared library from the env var NVTX_INJECTION64_PATH and call the function InitializeInjectionNvtx2 - part of the Cupti API. Btw, an excellent injection hook
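This load-and-call sequence can be mimicked from Python with ctypes - a sketch of what the disassembly above does (the behavior when the variable is unset is my reading of the jz branch):

```python
import ctypes
import os

def load_nvtx_injection():
    """Mimic the loader: dlopen the library named by NVTX_INJECTION64_PATH,
    dlsym InitializeInjectionNvtx2 and call it with a callback pointer."""
    path = os.environ.get("NVTX_INJECTION64_PATH")   # getenv
    if not path:
        return None                                  # jz loc_14B160: var not set
    lib = ctypes.CDLL(path)                          # dlopen(path, ...)
    init = lib.InitializeInjectionNvtx2              # dlsym
    init.restype = ctypes.c_int
    # the real code passes the address of an internal routine (sub_14A270);
    # a null pointer stands in for it in this sketch
    return init(ctypes.c_void_p(None))

print(load_nvtx_injection())
```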
 
Unfortunately these loggers don't collect the parameters of API functions - only their names, in packets with a fixed size of 0x30 bytes:
  lea     rax, aFailedCreatede+7        ; "CreateDebuggerSession"
  mov     [rbp+var_18], rax
  mov     rax, cs:dbg_log
  mov     [rbp+var_20], 0
  mov     dword ptr [rbp+var_40], 300003h
  mov     dword ptr [rbp+var_20], 1
  movaps  [rbp+var_30], xmm0
  test    rax, rax
  jz      loc_1470AC
  lea     rdx, [rbp+var_40]
  mov     r12, rdx
  mov     rdi, rdx
  call    rax
The name of the called function is stored at offset 0x28 and shows up as-is in the logs
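Judging by the disassembly, the packet layout is roughly this (the offsets are my reading of the code, so treat them as assumptions):

```python
import struct

# Presumed 0x30-byte log packet, reconstructed from the disassembly above:
#   +0x00  u32  magic/version (300003h)
#   +0x10  16 zero bytes (the movaps of xmm0)
#   +0x20  u32  flag set to 1
#   +0x28  u64  pointer to the API function name
PACKET_FMT = "<I4x24xI4xQ"          # 0x30 bytes total

def build_packet(name_ptr: int) -> bytes:
    return struct.pack(PACKET_FMT, 0x300003, 1, name_ptr)

pkt = build_packet(0xdeadbeef)      # name_ptr value is illustrative
print(hex(len(pkt)))                # 0x30
```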

Sunday, February 8, 2026

building cuda-gdb from sources

For some reason cuda-gdb from the cuda sdk gives on my machine a list of errors like

Traceback (most recent call last):
  File "/usr/share/gdb/python/gdb/__init__.py", line 169, in _auto_load_packages
    __import__(modname)
  File "/usr/share/gdb/python/gdb/command/explore.py", line 746, in <module>
    Explorer.init_env()
  File "/usr/share/gdb/python/gdb/command/explore.py", line 135, in init_env
    gdb.TYPE_CODE_RVALUE_REF : ReferenceExplorer,
AttributeError: 'module' object has no attribute 'TYPE_CODE_RVALUE_REF'

so I decided to rebuild it against the python version installed in my system - and this turned out to be a difficult task

The first question is: where is the source code? It seems that the official repository does not contain the cuda-specific code - so the raison d'être of this repo is totally unclear. I extracted cuda-gdb-13.1.68.src.tar.gz from the cuda sdk .deb archive and proceeded with it

Second - the configuring process is extremely fragile - if you pass a single wrong option you will find out only after 30-40 min. Also it seems that you just can't run configure in sub-dirs, bcs in that case the linker will complain about tons of missing symbols. So the configuration was found by trial and error
configure --with-python=/usr/bin/python3 --enable-cuda

And finally we get the file gdb/gdb with a size of 190 Mb. And after running it I got a stack trace beginning with
arch-utils.c:1374: internal-error: gdbarch: Attempt to register unknown architecture (2)

This all raises some questions for nvidia:

  • do they test their cuda sdk before releasing?
  • do they have QA at all, or do they, like microsoft, just test their ai shit directly on users?
  • from which sources was the original cuda-gdb actually built? 

Well, at least having some suspicious source code we can fix this build

Monday, January 26, 2026

print & analyse CUDA coredumps

inconvenient cuda-gdb can't process them automatically - you need to explicitly say something like
target cudacore /full/path/to/coredump

and then type lots of info cuda XXX 

So last weekend I wrote a tool to parse/dump CUDA coredumps, and it even works on a machine without the CUDA SDK (which might be useful if you collect all crash dumps in some centralized storage with the help of CUDA_COREDUMP_PIPE)

But first

Little bit of theory

The format of CUDA coredumps is documented in cudacoredump.h from cuda-gdb.deb
It contains a list of devices in the .cudbg.devtbl section and 2 groups of data
 
The first is the list of contexts and the resources attached to them, like global memory and the list of loaded modules, in .cudbg.relfimg.devX.ctxY sections. Those modules are just normal ELF files (some from the kernel runtime) and, most importantly, they contain the load addresses for each section - this is how we can find the module/function of a faulty instruction

The second group contains the whole thread hierarchy:

  • list of SMs in .cudbg.smtbl.devX section
  • list of CTAs in .cudbg.ctatbl.devX.smY sections
  • list of WARPs in .cudbg.wptbl.devX.smY.ctaZ sections
  • and finally list of threads in each warp - in sections .cudbg.lntbl.devX.smY.ctaZ.wpI

Each thread has its own set of sections:

  • for call stack - .cudbg.bt.devX.smY.ctaZ.wpI.lnJ
  • registers in .cudbg.regs.devX.smY.ctaZ.wpI.lnJ
  • predicates in .cudbg.pred.devX.smY.ctaZ.wpI.lnJ
  • local memory in .cudbg.local.devX.smY.ctaZ.wpI.lnJ. Curiously, those sections have the same addresses

At the same time, sections for Uniform registers (.cudbg.uregs.devX.smY.ctaZ.wpI) & predicates (.cudbg.upred.devX.smY.ctaZ.wpI) are attached to WARPs 
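Since the whole hierarchy is encoded in the section names, it can be rebuilt with a single regex - a sketch (the section list below is illustrative, and the ctx-style relfimg names are not handled):

```python
import re
from collections import defaultdict

# Section name pattern following the cudacoredump layout described above
SEC_RE = re.compile(
    r'^\.cudbg\.(?P<kind>\w+)'
    r'(?:\.dev(?P<dev>\d+))?(?:\.sm(?P<sm>\d+))?'
    r'(?:\.cta(?P<cta>\d+))?(?:\.wp(?P<wp>\d+))?(?:\.ln(?P<ln>\d+))?$')

def classify(names):
    """Group coredump section kinds by their (dev, sm, cta, wp, ln) coordinates."""
    tree = defaultdict(list)
    for n in names:
        m = SEC_RE.match(n)
        if not m:
            continue
        coords = tuple(int(v) for v in
                       (m['dev'], m['sm'], m['cta'], m['wp'], m['ln'])
                       if v is not None)
        tree[coords].append(m['kind'])
    return dict(tree)

sections = ['.cudbg.devtbl', '.cudbg.smtbl.dev0',
            '.cudbg.regs.dev0.sm1.cta2.wp3.ln4']
print(classify(sections))
```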

Where to get the faulty instruction address

This is a really good question. Actually we have 3 sources of addresses:
  1. for drivers with version >= 555 the SM has a field errorPC
  2. the WARP has a field errorPC too
  3. finally, each lane has fields exception & virtualPC in CudbgThreadTableEntry
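A plausible way to combine the three sources is to prefer the most specific one available - note that this precedence is my assumption, not something documented:

```python
def faulty_pc(lane: dict, warp: dict, sm: dict):
    """Pick an error address, preferring the most specific source.
    The dicts stand in for the corresponding coredump table entries;
    the field names follow cudacoredump.h."""
    if lane.get("exception") and lane.get("virtualPC") is not None:
        return lane["virtualPC"]            # per-lane virtualPC
    if warp.get("errorPC") is not None:
        return warp["errorPC"]              # per-warp errorPC
    return sm.get("errorPC")                # per-SM errorPC, driver >= 555 only

print(hex(faulty_pc({"exception": 1, "virtualPC": 0x7f0010}, {}, {})))
```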