MongoDB C# Driver doesn't release connections, then errors

I'm using the latest versions of MongoDB (on a 64-bit Windows server) and the C# driver. I have a Windows service that does 800 reads and updates per minute. After a few minutes, the number of threads in use climbs above 200, and then every single MongoDB call throws this error:

System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
   at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)

I have an index on the fields being queried, so that's not the issue. Here is the code for the read:

public static UserUpdateMongo Find(int userId, long deviceId)
{
    return Collection().Find(
        Query.And(
            Query.EQ("UserId", userId),
            Query.EQ("DeviceId", deviceId))).FirstOrDefault();
}

I instantiate the connection like so:

var settings = new MongoServerSettings
{
    Server = new MongoServerAddress(segments[0], Convert.ToInt32(segments[1])),
    MaxConnectionPoolSize = 1000
};
Server = MongoServer.Create(settings);
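
The same pool size can also be expressed as a connection string option; a minimal sketch, assuming segments holds the host and port as above:

// maxPoolSize is a standard connection string option understood by the C# driver.
var connectionString = String.Format("mongodb://{0}:{1}/?maxPoolSize=1000",
    segments[0], segments[1]);
Server = MongoServer.Create(connectionString);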

Am I doing something wrong or is there an issue with the C# driver? Help!!


The C# driver has a connection pool, and the maximum size of the connection pool is 100 by default. So you should never see more than 100 connections to mongod from a single C# client process. The 1.1 version of the C# driver did have an occasional problem under heavy load, where an error on one connection could result in a storm of disconnects and connects. You would be able to tell if that was happening to you by looking at the server logs, where a log entry is written every time a connection is opened or closed. If so, can you try the 1.2 C# driver that was released this week?

You should not have needed to create a queue of pending updates. The connection pool acts as a queue of sorts by limiting the number of concurrent requests.
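
If you do want to bound how many callers can wait for a pooled connection, the pool itself is tunable; a minimal sketch, assuming your 1.x driver version exposes WaitQueueSize and WaitQueueTimeout on MongoServerSettings (check your driver version):

var settings = new MongoServerSettings
{
    Server = new MongoServerAddress("localhost", 27017),
    MaxConnectionPoolSize = 100,                 // the default; callers beyond this wait in a queue
    WaitQueueSize = 500,                         // assumption: cap on callers waiting for a connection
    WaitQueueTimeout = TimeSpan.FromSeconds(30)  // assumption: how long a caller waits before failing
};
var server = MongoServer.Create(settings);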

Let me know if you can find anything in the server logs, and if there is anything further I can help you with.


The solution was to stop saving records on each individual thread and instead add them to a "pending to save" list in memory, with a single separate thread handling all saves to MongoDB synchronously. I don't know why the concurrent calls cause the C# driver to trip up, but this is working beautifully now. Here is some sample code if others run into this problem:

// Requires using System, System.Collections.Generic,
// System.ComponentModel (BackgroundWorker) and System.Threading (Thread, Monitor).
public static class UserUpdateSaver
    {
        public static List<UserUpdateView> PendingUserUpdates;

        public static void Initialize()
        {
            PendingUserUpdates = new List<UserUpdateView>();
            var saveUserUpdatesTime = Convert.ToInt32(ConfigurationBL.ReadApplicationValue("SaveUserUpdatesTime"));
            LogWriter.Write("Setting up timer to save user updates every " + saveUserUpdatesTime + " seconds", LoggingEnums.LogEntryType.Warning);
            var worker = new BackgroundWorker();
            worker.DoWork += delegate(object s, DoWorkEventArgs args)
            {
                while (true)
                {
                    // process pending user updates every x seconds
                    Thread.Sleep(saveUserUpdatesTime * 1000);
                    ProcessPendingUserUpdates();
                }
            };
            worker.RunWorkerAsync();
        }

        public static void AddUserUpdateToSave(UserUpdateView userUpdate)
        {
            Monitor.Enter(PendingUserUpdates);
            PendingUserUpdates.Add(userUpdate);
            Monitor.Exit(PendingUserUpdates);
        }

        private static void ProcessPendingUserUpdates()
        {
            // Take a snapshot of the pending updates under the lock;
            // List<T> is not safe for a concurrent read and write.
            List<UserUpdateView> pendingUserUpdates;
            Monitor.Enter(PendingUserUpdates);
            pendingUserUpdates = new List<UserUpdateView>(PendingUserUpdates);
            Monitor.Exit(PendingUserUpdates);
            if (pendingUserUpdates.Count > 0)
            {
                var startDate = DateTime.Now;

                foreach (var userUpdate in pendingUserUpdates)
                {
                    try
                    {
                        UserUpdateStore.Update(userUpdate);
                    }
                    catch (Exception exc)
                    {
                        LogWriter.WriteError(exc);
                    }
                    finally
                    {
                        Monitor.Enter(PendingUserUpdates);
                        PendingUserUpdates.Remove(userUpdate);
                        Monitor.Exit(PendingUserUpdates);
                    }
                }

                var duration = DateTime.Now.Subtract(startDate);
                LogWriter.Write(String.Format("Processed {0} user updates in {1} seconds",
                    pendingUserUpdates.Count, duration.TotalSeconds), LoggingEnums.LogEntryType.Warning);
            }
            else
            {
                LogWriter.Write("No user updates to process", LoggingEnums.LogEntryType.Warning);
            }
        }
    }
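
As a follow-up, the same single-writer pattern can be written with less manual locking; a minimal sketch, assuming .NET 4's ConcurrentQueue<T> (System.Collections.Concurrent) and the same UserUpdateStore and LogWriter helpers as above:

public static class UserUpdateQueue
{
    // Thread-safe queue; producer threads enqueue without explicit locks.
    private static readonly ConcurrentQueue<UserUpdateView> Pending =
        new ConcurrentQueue<UserUpdateView>();

    public static void Add(UserUpdateView userUpdate)
    {
        Pending.Enqueue(userUpdate);
    }

    // Called periodically from the single writer thread.
    public static void ProcessPending()
    {
        UserUpdateView userUpdate;
        while (Pending.TryDequeue(out userUpdate))
        {
            try
            {
                UserUpdateStore.Update(userUpdate);
            }
            catch (Exception exc)
            {
                LogWriter.WriteError(exc);
            }
        }
    }
}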


Have you heard about message queueing? You could put a bunch of boxes behind a message queue to handle this kind of load and have them save your data to MongoDB. In this case, though, your message queue must be able to run concurrent publish/subscribe. A free option (very good in my opinion) is MassTransit with RabbitMQ.

The workflow would be: 1. Publish your data to the message queue; 2. Once it's there, launch as many boxes as you want running subscribers that save and process your Mongo data.

This approach will be good if you need to scale.
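
To make the shape of that workflow concrete, here is a minimal sketch against a modern MassTransit/RabbitMQ API; the message type, queue name, and broker address are illustrative assumptions, not anything from the original posts, and the exact Host overload varies by MassTransit version:

// Requires the MassTransit and MassTransit.RabbitMQ packages
// (using MassTransit; using System.Threading.Tasks;).

// Hypothetical message carrying one user update.
public class UserUpdateMessage
{
    public int UserId { get; set; }
    public long DeviceId { get; set; }
}

// Consumer that performs the actual MongoDB save;
// each subscriber box runs its own instances of this.
public class UserUpdateConsumer : IConsumer<UserUpdateMessage>
{
    public Task Consume(ConsumeContext<UserUpdateMessage> context)
    {
        // Save context.Message to MongoDB here.
        return Task.CompletedTask;
    }
}

// At startup:
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host("rabbitmq://localhost"); // assumption: local broker
    cfg.ReceiveEndpoint("user-updates", e => e.Consumer<UserUpdateConsumer>());
});
await bus.StartAsync();

// Publisher side:
await bus.Publish(new UserUpdateMessage { UserId = 123, DeviceId = 456L });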
