Friday, November 28, 2025

bug in sass MD

Spent a couple of days debugging a rare bug in my SASS disasm. I tested it on thousands of .cubin files and got bad instruction decoding for exactly one of them. Btw, I've never seen papers about testing disassemblers - compilers like gcc/clang have huge test suites to detect regressions, so probably I should do the same. The problem is that I periodically add new features, and the smxx.so files get regenerated every time

My nvd has an option -N to dump unrecognized opcodes, so for sm55 I got

Not found at E8 0000100000010111111100010101110000011000100000100000000000000011101001110000000000000000

nvdisasm v11 swears that this pile of 0s & 1s must somehow be an ISCADD instruction. Ok, let's run ead.pl and check if it can find it:
perl ead.pl -BFvamrzN 0000100000010111111100010101110000011000100000100000000000000011101001110000000000000000 ../data/sm55_1.txt

found 4
........................0.0111000..11...................................................
0000-0-------111111-----0101110001011-------000--00000000000----------------------------
0000000-----------------0101110000111000-00000---00000000000----------------------------
00000--------111111-----01011100000110---000-----00000000000----------------------------
000000-------111111-----0001110---------------------------------------------------------
matched: 0
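The matching that ead.pl performs here boils down to comparing a bit string against mask patterns where '-' and '.' mean don't-care and '0'/'1' must match exactly. A minimal Python sketch of that idea (not the actual ead.pl code, just an illustration on toy-sized strings):

```python
# Match an instruction bit string against a mask pattern where
# '-' and '.' mean "don't care" and '0'/'1' must match exactly.
def matches(bits: str, mask: str) -> bool:
    if len(bits) != len(mask):
        return False
    return all(m in '-.' or m == b for b, m in zip(bits, mask))

print(matches('0101', '0-0-'))  # True: fixed bits agree
print(matches('0111', '0-0-'))  # False: bit 2 is 1, mask demands 0
```

With the real 88-bit encoding above, all 4 candidate masks fail exactly this check - hence "matched: 0".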

the first thought was that the MD are just too old bcs they were extracted from cuda 10, so I made a decryptor for cuda 11 (paranoid nvidia removed instruction properties since version 12, so 11 is the last source of MD), extracted the data, rebuilt sm55.cc and sm55.so and ran the test again

The bug has not disappeared

Tuesday, November 25, 2025

SASS latency table & instructions reordering

In these difficult times, no one wants to report bad or simply weak results (and this will destroy this hypocritical civilization). Since this is my personal blog and I am not looking for grants, I don't care.

Let's dissect one truly inspiring paper - they employed reinforcement learning and claim that

transparently producing 2% to 26% speedup

wow, 26% is a really excellent result. So I decided to implement the proposed technique, but first I needed a source of latency values for each SASS instruction. I extracted the files with latency tables from nvdisasm - their names have _2.txt suffixes

Then I made a perl binding for my perl version of Ced (see the methods of the Cubin::Ced::LatIndex object), added a new pass (option -l for dg.pl) and did some experiments to dump latency values for each instruction. Because the connection order is unknown, I implemented all 3 variants:

  1. current column with current row
  2. column from previous instruction with current row
  3. current column with row from previous instruction
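The three pairing schemes above can be sketched like this in Python - note that the flat `table` and the per-instruction (column, row) pairs are assumptions for illustration only, not the real nvdisasm latency-table layout:

```python
# Hypothetical sketch of the 3 column/row connection orders.
# Each instruction is a (col, row) pair; 'table' is a toy 2x2 table.
def latency(table, instrs, scheme):
    out = []
    prev_col = prev_row = 0
    for col, row in instrs:
        if scheme == 1:       # current column with current row
            c, r = col, row
        elif scheme == 2:     # column from previous instruction, current row
            c, r = prev_col, row
        else:                 # current column, row from previous instruction
            c, r = col, prev_row
        out.append(table[c][r])
        prev_col, prev_row = col, row
    return out

table = [[1, 2], [3, 4]]
instrs = [(0, 1), (1, 0)]
print(latency(table, instrs, 1))  # [2, 3]
print(latency(table, instrs, 2))  # [2, 1]
print(latency(table, instrs, 3))  # [1, 4]
```

Each scheme yields a different latency sequence from the same stream, which is exactly why all 3 had to be tried.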

The results are discouraging:

  • some instructions (~1.5% in the best case, variant 1) have no latency at all (for example S2R or XXXBAR)
  • some instructions have more than 1 index into the same table - well, I fixed this by selecting the max value (see function intersect_lat)
  • when comparing with the actual stall counts, the percentage of incorrect values is above 60 - that's even worse than just flipping a coin

Some possible reasons for failure:

Thursday, November 13, 2025

sass register reuse

Let's continue composing useful things based on the perl-driven Ced. This time I added a couple of new options to the test script dg.pl for register reuse

What is it at all? Nvidia, as usual, doesn't want you to know. It is implemented in SASS as a set of operand attributes "reuse_src_XX", usually located in scheduler tables like TABLES_opex_X (the newer ones like reuse_src_e & reuse_src_h are enums of type REUSE)

We can view register reuse as a hint to the GPU scheduler that some register in an instruction can reuse the physical register already allocated to one of its source operands, avoiding a full register allocation and reducing register pressure - in other words, as a kind of register cache

So the first question is: how can we detect the size of this cache? I made a new pass (option -u) to collect all "reuse" attributes and find the maximum number acting simultaneously - see function add_ruc
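The idea behind such a pass can be sketched as follows - walk the instruction stream, track which registers are currently "live" in the reuse cache (flagged via .reuse and evicted when overwritten), and record the peak. This is a hypothetical Python illustration with plain dicts, not the real add_ruc working on Ced attributes:

```python
# Estimate reuse-cache size: peak number of simultaneously
# reuse-flagged registers in an instruction stream.
def max_reuse(instrs):
    live, peak = set(), 0
    for ins in instrs:
        live |= set(ins.get('reuse', []))   # registers flagged .reuse
        peak = max(peak, len(live))
        for reg in ins.get('writes', []):   # overwrite evicts the cached copy
            live.discard(reg)
    return peak

stream = [
    {'reuse': ['R3'], 'writes': ['R11']},
    {'reuse': ['R5'], 'writes': ['R3']},
]
print(max_reuse(stream))  # 2: R3 and R5 are cached at the same time
```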

The results are not very exciting - I was unable to find functions in cublas with a cache size of more than 2. I remember coming across a statement somewhere in the numerous GPU-dissecting papers that it equals 4 - unfortunately I can't remember the name of that paper :-(

And the next question is: can we automatically detect where registers can be reused and patch the SASS?

Monday, November 10, 2025

barriers & registers tracking for sass disasm

Finally I added register tracking to my perl SASS disasm

Now I can do some full-featured analysis of SASS - like finding candidate pairs of instructions to swap/run in so-called "dual" mode - and all of this in barely 1200 LoC of perl code

Let's think about what it means for a couple of instructions to be fully independent:

  1. they should belong to the same block - e.g. in the case of
      IADD R8, -R3, RZ
    .L_x_14:
      FMUL R11, R3.reuse, R3
    the instructions should be treated as located in different blocks
  2. they should not depend on the same barriers
  3. they should not update registers used by each other
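Conditions 2 and 3 above reduce to simple set operations. A hypothetical Python sketch (the dict representation of an instruction is an assumption for illustration, not the real perl disasm structures):

```python
# Two instructions (in the same block) are independent if they don't
# share barriers and neither writes a register the other reads or writes.
def independent(a, b):
    if a['barriers'] & b['barriers']:
        return False                 # condition 2: shared barrier
    if a['writes'] & (b['reads'] | b['writes']):
        return False                 # condition 3: a clobbers b's registers
    if b['writes'] & (a['reads'] | a['writes']):
        return False                 # condition 3: b clobbers a's registers
    return True

iadd = {'reads': {'R3'}, 'writes': {'R8'},  'barriers': set()}
fmul = {'reads': {'R3'}, 'writes': {'R11'}, 'barriers': set()}
print(independent(iadd, fmul))  # True: both only read R3, writes don't overlap
```

Note that reading the same register (R3 here) is fine - only a write-after-read, read-after-write or write-after-write overlap kills independence.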

So I implemented building of the control-flow graph, plus barrier & register tracking

Building of CFG

Tuesday, October 28, 2025

sass disasm on perl

As an illustration of the use of the modules presented in my previous post, I made yet another SASS disasm - fully written in Perl. It is an almost exact copy of my nvd - implemented in just 460 LoC; the only unsupported feature is register tracking - bcs I still haven't made a perl binding for it. What it can do better than the original nvdisasm:

and the most important thing - bcs it's based on Ced - you can patch any instruction from your script. Or customize the output, save it somewhere like a DB via Perl DBI, or add your own passes to reveal some dirty nvidia secrets

like

Barriers

Friday, October 17, 2025

perl modules for CUBINs patching

After playing a bit with my ced I came to the conclusion that the implemented DSL for editing is not enough - e.g. it would be good to have subroutines to patch repeated/similar instructions, check that a patched instruction is what I want, patch attributes/relocs etc.
In other words, I need a full-fledged PL. Although I've read the book series "Modern Compiler Implementation" by Andrew Appel and "Crafting Interpreters", I think making my own PL is overkill, so I made several XS modules to edit/patch CUBIN files from Perl. Why Perl?
  • I am able to write almost everything I want in it
  • when I can't - I can always develop my own module(s)
  • and I don't feel sick from it like from pseudo-languages such as python
  • and it's damn good and fast when you try to sketch out prototypes for things you have no idea how to make


ELF::FatBinary

for extracting/replacing CUBIN files from FatBinaries
see details here


Cubin::Ced 

In essence this is a wrapper around Ced - it allows you to disasm/patch SASS instructions
Currently it doesn't support register tracking
See doc in POD format 


Cubin::Attrs

Module to extract/patch attributes of CUBIN files, and also relocs
doc in POD format

Sample

Wednesday, October 1, 2025

addresses of cuda kernel functions

Quote from the official documentation:

It is not allowed to take the address of a __device__ function in host code

I stopped being surprised long ago that the entire CUDA is made up of ridiculous restrictions. What if I told you that paranoid nvidia lies as usual, and you actually can get the addresses of kernel functions in your host code?

But first let's check which workarounds we can employ to get function pointers. I don't know for what pedagogical purpose this code was intentionally written so poorly that it doesn't free the allocated memory - and now millions of brainless artificial idiots will copy-paste it forever, so I made a patched version. You can see that an attempt to read from addresses gathered earlier with cudaMemcpyFromSymbol results in error 1 (invalid argument)

Ok, but we could just return the address of a function directly from another kernel function, right? So I made a quick & dirty hack
I brute-forced all combinations of cf1 (__device__/__constant__) & variants of cudaMemcpyFromSymbol/cudaMemcpy - with no luck
So it's time to run

cuda-gdb