measure time in a function in C

I'm debugging a C application and I'd like to know how much time it spends in a particular function.

I could change the source code and add some code to do the measurement, but that doesn't seem right to me. I'd rather do it with an external application, without recompiling every time.
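
For reference, the kind of in-source change I mean would be something like this (a minimal sketch using POSIX clock_gettime; work() is just a hypothetical stand-in for the real function):

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

static void work(void) { /* the function I actually want to time */ }

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    work();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("work() took %f seconds\n", elapsed);
    return 0;
}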

I found out it's possible to set a breakpoint in GDB, so I thought it must be possible to track the time with a similar tool using a simple procedure:

- set a breakpoint
- when stopped, record the current time and run the function
- when leaving the function, record the time again

However, I haven't found a way to do this in gdb :(

Any ideas? Thanks.


If you're using GCC, you want the compile option "-pg" and the application gprof.
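
A typical session looks something like this (a sketch, assuming a single-file program prog.c):

gcc -pg -o prog prog.c      # build with profiling instrumentation
./prog                      # running it writes gmon.out in the current directory
gprof ./prog gmon.out       # print the profile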


gprof is only reliable -- in my experience, only works at all -- if you statically link -pg compiled versions of every library, including the C library. You can try to do this using gcc's -profile option (which does what -pg does plus tries to sub in -pg libraries) but the problem is, GNU libc really does not like being statically linked, and your distro may not provide -pg compiled versions of every library you need.

I suggest you try cachegrind, which is a valgrind mode of operation and only needs debug info for everything. That's much easier to get. The catch is, it has huge overhead costs; so huge that it might invalidate your testing. Expect at least a 2x slowdown.
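
Roughly (a sketch; the output file name ends in the PID of the run):

valgrind --tool=cachegrind ./prog
cg_annotate cachegrind.out.<pid>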

You can also try perf -- if you can get your hands on a copy. It's very clever, but it is of, by, and for kernel hackers who think people like building stuff from scratch. I have had very mixed luck with it. (Do read http://web.eecs.utk.edu/~vweaver1/projects/perf-events/ which is about the underlying API, not the utility, but might still save you from a great deal of wasted time.)
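
If you do have it, the basic workflow is something like this (a sketch):

perf record -g ./prog       # sample the run, recording call graphs
perf report                 # browse time spent per function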


I have a helper function in my ~/.gdbinit:

define timeme
    set $last = clock()
    n
    set $timing = clock() - $last
    if $timing > $arg0
        printf "***long***\n"
    end
    printf "%d cycles, %f seconds\n", $timing, (float)$timing / 1000000
end

You may need to adjust the 1000000 depending on the value of CLOCKS_PER_SEC on your platform.
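
If you're not sure what that value is, a quick check (a minimal sketch) is to print it from a throwaway program:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* print the divisor to use in the helper above */
    printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
    return 0;
}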

Usage is trivial; run the helper, which will execute the next step and give timing information:

Breakpoint 2, install_new_payload_from_meta (snmp_meta=0x7eee81c0, pkt=0x0, entry=0x7d4f4e58) at /home/sgillibr/savvi-dc-snmp/recipies.c:187
(gdb) timeme 100000
***long***
580000 cycles, 0.580000 seconds
(gdb)

Obviously the resolution may not be enough for some needs, but it does prove very useful.


Put this into your ~/.gdbinit:

define timeme
    python import time
    python starttime = time.time()
    next
    python print("Previous takes: " + str(time.time() - starttime) + "s")
end
document timeme
    Measure execution time of the next function
    Usage: timeme or ti
end

Type timeme (or ti) when you want to measure the time of the next function.
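
An illustrative session (the timing value here is made up, and gdb's own output from the next command is omitted):

(gdb) timeme
Previous takes: 0.512345s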


Profiling is probably what you want. Take a look at prof or gprof.

UPDATE: After compiling with "cc -Wall -ggdb -pg -g3 -O2 diskhash.c -o diskhash" (and running the program), "gprof -p diskhash" gives me:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total           
 time   seconds   seconds    calls  ms/call  ms/call  name    
 32.60      0.41     0.41        1   410.75   646.18  create_hashtab
 31.80      0.81     0.40  5087692     0.00     0.00  hash_func
 27.83      1.16     0.35  2543846     0.00     0.00  find_hash
  2.78      1.20     0.04  2543846     0.00     0.00  chop_a_line
  1.59      1.22     0.02                             main
  0.40      1.22     0.01                             frame_dummy
  0.00      1.22     0.00        4     0.00     0.00  map_da_file
