WCF/MQSeries: How do you keep a poison message in the poison queue?

I have a WebSphere MQ contract that I've set up. The queue is configured to deliver a message that has been retried 5 times to a separate queue for poison processing. I interact with the service using the WCF channel provided by WebSphere MQ v7.0.1.

While my service is connected, things work perfectly. As soon as the service disconnects, the poison messages reappear in the primary queue. Restarting the service instantly puts the messages back from the primary queue into the poison queue. What do I need to do in order to get the messages to stay in the poison queue after the service disconnects from the queue?

The code is currently in POC mode, so I'm hosting the service in a WPF Window. The class has these attributes:

  [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
  [AssuredDelivery()]
  public partial class MainWindow : Window, IWmqEndpoint

Service contract is:

  [ServiceContract()]
  public interface IWmqEndpoint
  {
    [OperationContract(IsOneWay = true)]
    void SendMessage(string message);

    [OperationContract(IsOneWay = true)]
    void SendComplex(PersonName name);
  }
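
For context, a client exercising this contract over the IBM-supplied WCF channel might look like the sketch below. The endpoint configuration name "wmqEndpoint" is an assumption; the actual binding and address come from the WMQ WCF channel configuration in app.config, which the question doesn't show.

  using System.ServiceModel;

  class WmqClient
  {
    static void Main()
    {
      // "wmqEndpoint" is a hypothetical endpoint configuration name; the
      // WMQ WCF channel's binding and address would be defined in app.config.
      var factory = new ChannelFactory<IWmqEndpoint>("wmqEndpoint");
      IWmqEndpoint proxy = factory.CreateChannel();
      proxy.SendMessage("hello");  // one-way: returns as soon as the message is queued
      ((IClientChannel)proxy).Close();
      factory.Close();
    }
  }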

The main queue is persistent; otherwise it has default settings. The same goes for the poison queue.

Queue setup:

  1. Backout request queue: wcf.inbound.poison
  2. Backout threshold: 5
  3. Harden get backout: Not hardened
  4. NPM class: Normal
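
For reference, those settings map onto MQSC keywords on the input queue. A sketch of the equivalent ALTER command is below; the input queue name WCF.INBOUND is made up for illustration:

  ALTER QLOCAL('WCF.INBOUND') +
        BOQNAME('wcf.inbound.poison') +
        BOTHRESH(5) +
        NOHARDENBO +
        NPMCLASS(NORMAL)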


When an application reads a message inside a unit of work, WMQ has no way of knowing whether more messages will be added to the same unit of work. If the message turns out to be a poison message, WMQ requeues it to the backout queue in the same unit of work as everything else. If WMQ were to move the message outside a unit of work, there's a chance it could be lost or duplicated. If WMQ committed the UOW on your behalf, there's a chance it would commit messages you've retrieved but not completely processed. So WMQ makes you explicitly commit between units of work when performing backout processing.

The behavior you are seeing sounds like the messages are being held on the backout queue under syncpoint. When the app shuts down, an implicit BACKOUT occurs and all the messages go back to their original queue. You need at least one successful unit of work to make the messages stay on the backout queue. Try putting a good message on the input queue behind the bad ones and then executing an explicit COMMIT; the sketch below walks through the mechanics.
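
The WCF channel does the backout-threshold handling for you under the covers; to make the commit scope visible, here is a rough sketch of the same mechanics using the base WMQ .NET classes (amqmdnet). The queue manager and input queue names are placeholders, and Process() stands in for your application logic:

  using IBM.WMQ;  // base WMQ .NET classes (amqmdnet.dll)

  class SyncpointConsumer
  {
    static void Main()
    {
      // Placeholder names; substitute your own queue manager and input queue.
      var qMgr   = new MQQueueManager("QM1");
      var input  = qMgr.AccessQueue("WCF.INBOUND",
                     MQC.MQOO_INPUT_AS_Q_DEF | MQC.MQOO_FAIL_IF_QUIESCING);
      var poison = qMgr.AccessQueue("wcf.inbound.poison",
                     MQC.MQOO_OUTPUT | MQC.MQOO_FAIL_IF_QUIESCING);

      var gmo = new MQGetMessageOptions();
      gmo.Options = MQC.MQGMO_SYNCPOINT | MQC.MQGMO_WAIT;  // get inside a unit of work
      gmo.WaitInterval = 5000;

      var pmo = new MQPutMessageOptions();
      pmo.Options = MQC.MQPMO_SYNCPOINT;  // requeue inside the SAME unit of work

      while (true)
      {
        var msg = new MQMessage();
        try { input.Get(msg, gmo); }
        catch (MQException ex)
        {
          if (ex.ReasonCode == MQC.MQRC_NO_MSG_AVAILABLE) break;  // queue drained
          throw;
        }

        if (msg.BackoutCount >= 5)  // backout threshold reached: poison message
        {
          poison.Put(msg, pmo);  // move to the backout queue, same UOW...
          qMgr.Commit();         // ...and COMMIT; a rollback or disconnect here
          continue;              // would put it straight back on the input queue
        }

        try
        {
          Process(msg.ReadString(msg.MessageLength));
          qMgr.Commit();   // explicit end of a successful unit of work
        }
        catch
        {
          qMgr.Backout();  // message returns to the input queue; BackoutCount + 1
        }
      }

      qMgr.Disconnect();
    }

    static void Process(string body) { /* application logic here */ }
  }

The key line is the Commit() after the requeue: until some unit of work ends in a commit, the moves to the backout queue are provisional, which is exactly the behavior described in the question.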

By the way, WebSphere MQ does not have the concept of a persistent queue. Persistence is an attribute of the message. The queue has an attribute DEFPSIST which tells the app whether to make a message persistent or not when the message is first created. Persistence of the message will not change as it travels throughout the WMQ network, regardless of the setting of the queues it passes through. Therefore, DEFPSIST on the backout queue has no effect whatsoever since the message persistence was defined before it hit the backout queue.
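
To make that concrete, here is a small sketch with the base WMQ .NET classes (queue manager and queue names are again placeholders) showing that persistence is chosen per message at put time:

  using IBM.WMQ;

  class PersistenceDemo
  {
    static void Main()
    {
      var qMgr  = new MQQueueManager("QM1");       // placeholder name
      var queue = qMgr.AccessQueue("WCF.INBOUND",  // placeholder name
                    MQC.MQOO_OUTPUT | MQC.MQOO_FAIL_IF_QUIESCING);

      var msg = new MQMessage();
      msg.Format = MQC.MQFMT_STRING;
      msg.WriteString("payload");

      // Persistence is stamped on the MESSAGE when it is created:
      msg.Persistence = MQC.MQPER_PERSISTENT;              // survives a qmgr restart
      // msg.Persistence = MQC.MQPER_NOT_PERSISTENT;       // or explicitly not
      // msg.Persistence = MQC.MQPER_PERSISTENCE_AS_Q_DEF; // or defer to DEFPSIST
      // Whatever is chosen here travels with the message; the backout queue's
      // own DEFPSIST never changes it.

      queue.Put(msg);
      qMgr.Disconnect();
    }
  }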
