Monitoring UDP socket in glib(mm) eats up CPU time

I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is a native winsock socket, and I use a glibmm IOChannel to connect it to the application's main loop. The socket is read with recvfrom().

My problem is: this setup eats 25% CPU time on a 3GHz workstation. Can somebody tell me why?

The application is idle in this case, and if I remove the UDP code, CPU usage drops to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%.

Here are some code excerpts: (sorry for the printf's ;) )


#include <winsock2.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

/* bind */
void UDPInterface::bindToPort(unsigned short port)
{
    struct sockaddr_in target;
    WSADATA wsaData;

    /* listen on the given port on all local interfaces */
    memset(&target, 0, sizeof(target));
    target.sin_family = AF_INET;
    target.sin_port = htons(port);
    target.sin_addr.s_addr = INADDR_ANY;

    if ( WSAStartup( 0x0202, &wsaData ) )
    {
        printf("WSAStartup failed!\n");
        exit(1); /* nothing to clean up if startup itself failed */
    }

    sock = socket( AF_INET, SOCK_DGRAM, 0 );
    if (sock == INVALID_SOCKET)
    {
        printf("invalid socket!\n");
        exit(1);
    }

    if (bind(sock, (struct sockaddr*) &target, sizeof(struct sockaddr_in)) == SOCKET_ERROR)
    {
        printf("failed to bind to port!\n");
        exit(1);
    }

    printf("[UDPInterface::bindToPort] listening on port %i\n", port);
}

/* read */
bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
{
    recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
    /* process packet... */
    return true; /* keep the handler connected; returning false would disconnect it */
}

/* glibmm connect */
Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
Glib::signal_io().connect( sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN );

I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into non-blocking mode, and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU usage? It's not clear to me.
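For completeness, here is how I understand a handler has to cope with the forced non-blocking mode: keep reading until recvfrom() reports there is nothing left. This is only a sketch; the WSAEWOULDBLOCK handling is my assumption about correct usage, not something the docs spell out.

/* Sketch: drain all queued datagrams in one wakeup. On a non-blocking
 * socket, recvfrom() returns SOCKET_ERROR with WSAGetLastError() ==
 * WSAEWOULDBLOCK once the queue is empty; I assume that should be
 * treated as "done for now" rather than as a failure. */
bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
{
    for (;;)
    {
        int n = recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
        if (n == SOCKET_ERROR)
        {
            if (WSAGetLastError() != WSAEWOULDBLOCK)
                printf("recvfrom failed: %d\n", WSAGetLastError());
            break; /* queue drained (or a real error) */
        }
        /* process n bytes... */
    }
    return true; /* keep the handler connected */
}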

Whether or not I use glib to access the socket, or call recvfrom() directly, doesn't seem to make much difference, since the CPU is used up before any packet arrives and before the read handler gets invoked. The glibmm docs also state that it's fine to call recvfrom() even while the socket is being polled (Glib::IOChannel::create_from_win32_socket()).

I've tried compiling the program with -pg and creating a per-function CPU usage report with gprof. That wasn't useful, because the time is not spent in my program but in some external glib/glibmm DLL.
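In case it helps, the problem should reproduce with just the skeleton below (a minimal sketch; "udpinterface.h" and the port number are placeholders for my code above, and I'm assuming the gtkmm 2.x Gtk::Main API):

#include <gtkmm.h>
#include "udpinterface.h" /* hypothetical header for the UDPInterface shown above */

int main(int argc, char* argv[])
{
    Gtk::Main kit(argc, argv);

    UDPInterface udp;
    udp.bindToPort(5000); /* arbitrary test port */

    /* wrap the winsock socket and hook it into the main loop */
    Glib::RefPtr<Glib::IOChannel> channel =
        Glib::IOChannel::create_from_win32_socket(udp.sock);
    Glib::signal_io().connect(sigc::mem_fun(udp, &UDPInterface::UDPEvent),
                              channel, Glib::IO_IN);

    Gtk::Window window;
    kit.run(window); /* main loop is idle, yet CPU usage is already ~25% */
    return 0;
}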
