HWM management on publisher side

Description

> I am using ZMQ 3.0.x on Linux boxes with the PUB/SUB pattern.
> I have only one subscriber, which is very slow: it needs 1 second
> every time a message is read.
> I have the HWM on the publisher side set to 10.
>
> In my message, I have a counter which is incremented for each message.
> My messages are relatively small (150 bytes).
> I print the time (gettimeofday) each time the publisher sends a
> message.
> I also run wireshark on the same host as the publisher to capture
> network packets.
>
> With wireshark, I see that ZMQ drops messages number 11 to 39. I don't
> understand why.
> All the previous messages (number 1 to 10) have been sent on the
> network, because I see them in wireshark.
> The time reported by wireshark is consistent with the time printed by
> the publisher.
>
> Message 8 sent by publisher at xxx623,205547
> Message 8 seen by wireshark at xxx623,205556
>
> Message 9 sent by publisher at xxx623,205575
> Message 9 seen by wireshark at xxx623,205584
>
> Message 10 sent by publisher at xxx623,205603
> Message 10 seen by wireshark at xxx623,205611
>
> Message 11 sent by publisher at xxx623,205629
> Message 12 sent by publisher at xxx623,205654
> Message 13 sent by publisher at xxx623,205704
> Message 14 sent by publisher at xxx623,205729
> ....
>
> These messages are not seen by wireshark, so I guess ZMQ took the
> decision to drop them.
> But why did it take that decision? I don't think there are messages in
> the queue, because I have seen them on the wire!
>

From what I have understood, the problem is the following.
The pipe used for communication between my application thread and the
ZMQ I/O thread is effectively marked as full
(msgs_written - peers_msgs_read == uint64_t (hwm) in the check_write
method of pipe.cpp) even though I have seen my messages on the wire
(shown by wireshark).
The I/O thread has effectively sent the messages on the wire and has
sent the "activate_write" command to the pipe. When the application
thread sends a message, it processes the pipe command list (method
socket_base_t::process_commands()), but the "activate_write" commands
(there are several) are not executed immediately.
This is due to code in socket_base_t::process_commands() just before
the loop that processes the commands: there is some code to optimize
command processing which decides to return from the method before the
commands are processed. If I comment out this part of the code, things
work much better and I do not notice dropped messages.

Environment

Linux boxes (Ubuntu)

Attachments

2

Activity


Martin Hurton July 6, 2012 at 9:50 AM

I think there is a link between those issues. Would you mind preparing a patch for this? Thanks!

taurelf June 7, 2012 at 12:07 PM

I am not able to reproduce this issue with release 3.2.0 rc1. But I have to say that I have updated my hardware platform to a brand new i7 computer. Because the issue was related to some optimization, I do not notice the problem with this new hardware. My previous computer was quite old (more than 5 years). Anyway, is there a link between the problem reported here and this thread (and patch) on the Crossroads mailing list?

http://groups.crossroads.io/groups/crossroads-dev/messages/topic/jJ5x65jr3FTaiZ8Fjwqe5#post-iZXqUrsBSwluZpzjCYA6l

I have asked the question on the Crossroads mailing list but got no answer.

Cheers

Martin Hurton May 31, 2012 at 10:22 AM

Exactly. There is an optimization: for a short while we ignore the signal from the consumer telling the producer that it can write more messages to the socket.
You may want to tune the max_command_delay option in config.hpp for your use case, if you are willing to trade some throughput and latency.
Let us know if this works for you.
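
The knob mentioned above is a compile-time constant, so tuning it means editing config.hpp and rebuilding. A hedged sketch of the kind of change involved (the constant name comes from the comment above; the values and the surrounding comment are illustrative, not libzmq's actual defaults):

```cpp
//  In libzmq's src/config.hpp (sketch; values illustrative):
enum
{
    //  Maximum interval between two consecutive processings of
    //  inbound commands on a socket.  Lowering it makes the sending
    //  thread notice "activate_write" sooner, so fewer messages are
    //  dropped against a stale "full" pipe, at the cost of more
    //  frequent clock checks (some throughput and latency).
    max_command_delay = 100000
};
```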

taurelf May 31, 2012 at 9:22 AM

Hi Martin,

For me, the problem is that ZMQ drops too many messages. ZMQ still thinks the pipe is full when it is not, because I have seen the messages on the wire (with wireshark). A message cannot be in the pipe and on the wire at the same time!
Because ZMQ thinks the pipe is full, it drops messages when it could transfer them (the pipe is not full).

I guess this is due to some optimization. As I explain in the bug report, if I comment out the optimization code in pipe.cpp, far fewer packets are dropped, but I guess I am losing performance somewhere.

Cheers

Martin Hurton May 30, 2012 at 9:52 PM

Your analysis is correct.
The HWM can be thought of as the pipe capacity.
It takes some time for the signal that the pipe can accept more messages to reach the producer.
Why do you think this is a problem?

Cannot Reproduce

Details


Created November 30, 2011 at 8:16 AM
Updated August 16, 2012 at 5:34 AM
Resolved August 16, 2012 at 5:34 AM